Show HN: PromptSonar – Static Analysis for LLM Prompt Security
🔬 Research
#llm
#owasp
#review
#security
#static-analysis
#prompt
Original source: Hacker News · Summary and analysis by Genesis Park
Summary
The developer points out that LLM security discussion focuses mostly on runtime, and built **PromptSonar**, a static analysis tool that scans prompt strings in source code ahead of time. It supports TypeScript, Python, Go, and other languages, and detects prompt injection, Unicode evasion such as Cyrillic homoglyphs, PII leakage, and privilege-escalation patterns, mapped to the OWASP LLM Top 10. It runs 100% locally with no telemetry, and ships as a CLI, a VS Code extension, and a GitHub Action that integrates with GitHub Code Scanning.
Full text
Static scanner for prompt injection (OWASP LLM01), API key leaks, and jailbreaks in code. Local, fast, no external LLM calls.

- Auto-Detect Embedded Prompts: Locates hardcoded LLM prompts in JavaScript, TypeScript, Python, Go, Java, Rust, C#, and configuration files automatically.
- Security Check (OWASP LLM01/LLM02): Instantly detects prompt injections, developer modes, role overrides, and Unicode/Base64 obfuscation, and exposes them.
- CI/CD Gating: Fails hard on critical vulnerabilities to protect CI pipelines.
- Live IDE Feedback: Diagnostics live in your editor, bridging directly into the exact same rules engine that powers the CLI.

Installation: open VS Code → Extensions → search "PromptSonar", or install the CLI:

```
# In the CLI directory
npm link ./packages/cli
promptsonar scan .
```

Once the PromptSonar extension is installed, you can scan your code seamlessly from within the editor. Note: these commands are run from the VS Code Command Palette, NOT your terminal.

- Run Health Check: Click the ▶ Run PromptSonar Health Check CodeLens that appears directly above any detected prompt, or use the play button in the Editor Title Menu.
- Scan Entire Workspace: Open the Command Palette (Cmd + Shift + P or Ctrl + Shift + P), type PromptSonar: Scan Entire Workspace, and hit Enter. This scans all supported files in your project and generates a master HTML security report.
- Configuration: If you find the CodeLenses visually distracting while typing, you can disable them by searching for promptsonar.enableCodeLens in your VS Code settings.

```
# Scan a specific file or directory
promptsonar scan tests/validation/ultimate_injection_test.js

# Output the report as JSON to parse programmatically
promptsonar scan . --json > report.json
```

Static analysis constraints (shared by Snyk, SonarQube, ESLint):

- Concatenated string assembly: `const a = "Ignore "; const p = a + "previous instructions";` → each fragment is scanned separately. Scheduled: v0.2.0.
- Non-English jailbreaks: Spanish, French, German, and Arabic injection patterns are not covered. → Multilingual rule pack: v0.2.0.
- Runtime-constructed prompts: Values fetched from a database or API at runtime cannot be statically analyzed. → PromptSonar Runtime SDK: Phase 4.
- Deep function indirection: `const getPrompt = () => JAILBREAK; usePrompt(getPrompt())` → Direct assignments and inline template literals only in v0.1.0.
- Token compression: The LLMLingua-2 engine is built; the license for production use is pending. → Coming in a future release.

Evasion — verified results:

- Base64-encoded jailbreaks: DETECTED ✅ (decoded before pattern match)
- Cyrillic homoglyph substitution: DETECTED ✅ (normalized before pattern match)
- Mathematical Unicode symbols: DETECTED ✅ (U+1D400–U+1D7FF range check)
- Zero-width character injection: DETECTED ✅ (stripped before pattern match)
This analysis was written by the Genesis Park editorial team with the help of AI. The original can be found via the source link.