Prompt Guard – a MITM proxy that blocks credentials before they reach AI APIs
📦 Open Source
#ai deals · #ai security · #anthropic · #chatgpt · #claude · #github copilot · #gpt-4 · #mitm proxy · #openai · #data leak prevention · #prompt blocking
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
**Prompt Guard** is a MITM proxy that intercepts prompts sent to AI coding tools and APIs, blocking sensitive information such as API keys and passwords before it leaks to third-party servers. It transparently inspects HTTPS traffic to GitHub Copilot, OpenAI, Anthropic, and other services, detecting credentials and personal information in real time with 12 built-in rules. The tool runs as a single binary and provides a web dashboard and a SQLite audit log so users can monitor what would otherwise leak.
Full Text
A lightweight HTTPS MITM proxy that intercepts prompts sent to AI coding assistants and APIs — blocking or redacting sensitive data before it leaves your machine.

AI tools like GitHub Copilot, ChatGPT, and Claude receive your full editor context. That context can contain API keys, passwords, SSNs, internal IP addresses, and other secrets — sent to third-party servers without you noticing. Prompt Guard sits between your tools and the AI APIs, inspects every prompt in real time, and blocks or redacts sensitive data before it is forwarded.

- HTTPS MITM proxy — transparent interception using a local CA cert
- Real-time inspection — rules run on every prompt before it's forwarded
- Block mode — the request is rejected; the AI receives a "blocked" message instead
- Redact mode — the sensitive value is replaced with [REDACTED] before forwarding; the AI still responds
- Web dashboard — live feed of all intercepted prompts with matched snippets and status
- 12 built-in rules — credentials, PII, tokens, private keys
- Live rule editing — change rule modes in the dashboard; changes are written back to rules.json instantly
- SQLite persistence — full audit log across restarts
- Single binary — no runtime dependencies

Intercepts prompts sent to:

| Service | Host |
|---|---|
| GitHub Copilot | *.githubcopilot.com |
| OpenAI | api.openai.com |
| Anthropic | api.anthropic.com |

All other HTTPS traffic is tunnelled through unchanged.
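The redact behaviour described above can be sketched in Go. This is an illustrative sketch, not Prompt Guard's actual implementation; the rule names and patterns below are simplified stand-ins for the real rule set.

```go
package main

import (
	"fmt"
	"regexp"
)

// rule pairs a name with a compiled pattern. The patterns here are
// simplified illustrations, not Prompt Guard's real rules.
type rule struct {
	name    string
	pattern *regexp.Regexp
}

var rules = []rule{
	{"aws-access-key", regexp.MustCompile(`AKIA[0-9A-Z]{16}`)},
	{"openai-api-key", regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`)},
}

// redact replaces every rule match in the prompt with [REDACTED]
// and reports which rules fired.
func redact(prompt string) (string, []string) {
	var matched []string
	for _, r := range rules {
		if r.pattern.MatchString(prompt) {
			matched = append(matched, r.name)
			prompt = r.pattern.ReplaceAllString(prompt, "[REDACTED]")
		}
	}
	return prompt, matched
}

func main() {
	out, hits := redact("deploy with key AKIAABCDEFGHIJKLMNOP please")
	fmt.Println(out)  // deploy with key [REDACTED] please
	fmt.Println(hits) // [aws-access-key]
}
```

In track mode the sanitised prompt would be forwarded; in block mode a non-empty match list would instead abort the request.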
| Rule | Severity | Default Mode |
|---|---|---|
| AWS Access Key (AKIA…) | High | Block |
| AWS Secret Key | High | Block |
| OpenAI API Key (sk-…) | High | Block |
| Anthropic API Key (sk-ant-…) | High | Block |
| GitHub Token (ghp_, gho_, …) | High | Block |
| Private Key (PEM block) | High | Block |
| Social Security Number | High | Block |
| Credit Card Number | High | Block |
| JWT Token | Medium | Track |
| Generic Secret / Password assignment | Medium | Track |
| Email Address | Low | Track |
| Internal IP Address (RFC-1918) | Low | Track |

Block — request is rejected; nothing is forwarded to the AI.
Track — matched value is replaced with [REDACTED] in the forwarded request; the AI responds to the sanitised prompt.

Rules can be switched between modes at any time from the dashboard without restarting.

Requirements:

- Go 1.21+
- macOS, Linux, or Windows

```
git clone https://github.com/chaudharydeepak/prompt-guard
cd prompt-guard
go build -o prompt-guard .
./prompt-guard
```

On first run a local CA cert is generated and setup instructions are printed:

```
┌─────────────────────────────────────────┐
│ Prompt Guard starting                   │
└─────────────────────────────────────────┘

CA cert: /Users/you/.prompt-guard/ca.crt

Install CA (run once):
  sudo security add-trusted-cert -d -r trustRoot \
    -k /Library/Keychains/System.keychain ~/.prompt-guard/ca.crt

Set proxy:
  export HTTP_PROXY=http://localhost:8080
  export HTTPS_PROXY=http://localhost:8080
  export NO_PROXY=localhost,127.0.0.1

Dashboard:  http://localhost:7778
Rules file: /Users/you/.prompt-guard/rules.json
```

The most reliable way is to set the proxy directly in VS Code settings (Cmd+,):

```json
"http.proxy": "http://localhost:8080",
"http.proxyStrictSSL": true
```

Then restart VS Code. Traffic from all Copilot models (Claude, GPT-4o, etc.) will flow through the proxy.
Node.js ignores the system keychain, so pass the CA cert explicitly:

```
export NODE_EXTRA_CA_CERTS=~/.prompt-guard/ca.crt
export HTTP_PROXY=http://localhost:8080
export HTTPS_PROXY=http://localhost:8080
export NO_PROXY=localhost,127.0.0.1
claude
```

To avoid setting these every session, add them to your ~/.zshrc (or ~/.bashrc).

Rules are configured in ~/.prompt-guard/rules.json. The file is created automatically when you first change a rule mode in the dashboard. You can also create or edit it manually — changes take effect on the next proxy restart.

Override a built-in rule (e.g. switch email from track to block):

```json
{
  "overrides": [
    { "id": "email", "mode": "block" },
    { "id": "jwt-token", "severity": "high" }
  ]
}
```

Add a custom rule:

```json
{
  "rules": [
    {
      "id": "my-internal-token",
      "name": "Acme Internal Token",
      "description": "Internal service token format",
      "pattern": "ACME-[A-Z0-9]{32}",
      "severity": "high",
      "mode": "block"
    }
  ]
}
```

Changes made in the dashboard are written back to rules.json automatically and survive restarts.

Flags:

- --port — Proxy port (default: 8080)
- --web-port — Dashboard port (default: 7778)
- --ca-dir — Directory for CA cert, key, and database (default: ~/.prompt-guard)

Architecture:

```
Your app (VS Code, curl, etc.)
  → HTTP_PROXY / HTTPS_PROXY
  → prompt-guard proxy (:8080)
      ├── Non-target hosts → blind tunnel (unchanged)
      └── Target hosts (OpenAI, Anthropic, Copilot)
            → TLS MITM (local CA cert)
            → parse JSON body → extract user prompt text
            → run rules
                ├── block match → reject request; return block message to client
                ├── track match → redact value in body; forward sanitised request
                └── clean → forward unchanged
            → store in SQLite (prompt, status, matched snippets)
  → web dashboard (:7778) reads SQLite
```

Project layout:

```
prompt-guard/
├── main.go        CLI entrypoint
├── proxy/
│   ├── ca.go      Local
```
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.