A tool that "swallows" and reuses the output of Claude/Codex/Cursor
hackernews
📦 Open Source
#ai
#anthropic
#claude
#cli
#codex
#gpt-4
#llama
#openai
#devtools
#context-management
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
Swallow is a CLI tool that does not write code itself; instead, it collects project context and the outputs of coding agents and generates a structured plan for the next development session. It installs on Go 1.22 or later, works with any OpenAI- or Anthropic-compatible API, and stores all data in a local SQLite database. After indexing their source code, users can run a single command to produce a briefing with a clear goal, an execution plan, and a validation checklist, ready to hand directly to a coding agent.
Full text
A project-based CLI tool that helps you continue software development sessions more effectively. Swallow does not write production code. It acts as a planning and context-orchestration layer around coding agents: collecting your project context, ingesting agent outputs, and generating a structured next-session brief you can paste directly into another coding agent. Swallow plans; another coding agent executes.

```shell
go install github.com/vule022/swallow/cmd/swallow@latest
```

Requires Go 1.22+. No CGO required.

```shell
# 1. Initialise swallow
swallow init

# 2. Create a project from your working directory
cd /path/to/your/project
swallow project init --name myproject --summary "My awesome project"

# 3. Index your codebase
swallow ingest .

# 4. Set your API key (OpenAI-compatible)
export SWALLOW_API_KEY=sk-...

# 5. Generate your next session brief
swallow spit "fix the auth flow"
```

`swallow init` initialises swallow configuration and storage (`~/.swallow/`).

`swallow project init` creates a new project rooted at the current directory:

```shell
swallow project init --name api-server --summary "REST API with auth and billing"
```

Other project commands list all projects (the active project is marked with `*`), set the active project by name or ID, and show the active project, indexed file count, and recent session history.

`swallow ingest` ingests a file or directory into the active project:

```shell
swallow ingest .          # ingest current directory
swallow ingest src/       # ingest a subdirectory
swallow ingest README.md  # ingest a single file
```

`swallow ingest-output` ingests a coding agent output (text/markdown) as structured context:

```shell
swallow ingest-output agent_output.md
cat output.txt | swallow ingest-output --stdin
```

Swallow will attempt to extract goals, decisions, blockers, and next actions from the text using the LLM.

`swallow spit` generates a structured next coding session brief.
```shell
swallow spit "fix the auth flow"
swallow spit "cleanup ingest and storage layer"
swallow spit "add better export" --detail-level detailed
swallow spit "quick fix" --compact-only
swallow spit "refactor" --model claude-3-5-sonnet-20241022
```

Flags:

- `--detail-level compact|standard|detailed`: controls plan depth (default: standard)
- `--compact-only`: print only the copy-ready prompt block
- `--full`: generate the most detailed plan possible
- `--model`: override the LLM model for this run

Output includes:

- Session title
- Primary goal
- Why now
- Current context
- Relevant files
- Execution plan
- Constraints / non-goals
- Validation checklist
- Expected deliverable
- Copy-ready prompt block

`swallow export` exports a handoff document based on recent project context:

```shell
swallow export
swallow export --full
swallow export --compact
```

`swallow doctor` verifies configuration, API key, database, and active project:

```shell
swallow doctor
```

Configuration is stored at `~/.swallow/config.json`:

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "base_url": "https://api.openai.com/v1",
  "default_project": "",
  "copy_mode": "print",
  "storage_backend": "sqlite",
  "max_tokens": 4096,
  "temperature": 0.3
}
```

Swallow uses an OpenAI-compatible API client, so any provider with an OpenAI-compatible API works:

```json
{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet-20241022",
  "base_url": "https://api.anthropic.com/v1"
}
```

Or with Ollama locally:

```json
{
  "provider": "ollama",
  "model": "llama3.2",
  "base_url": "http://localhost:11434/v1"
}
```

The API key is always set via an environment variable and never stored in config:

```shell
export SWALLOW_API_KEY=sk-...
```

How it works:

- Ingest: Swallow recursively scans your project, ignoring noise (node_modules, .git, build artifacts, binaries). For each file it computes a SHA-256 hash for deduplication.
- Context building: When you run `spit`, Swallow fetches your recent coding agent outputs, session history, and a relevance-scored set of document summaries. It builds a structured context prompt.
- Planning: The context is sent to your configured LLM with a strict JSON schema.
The model returns a structured plan with title, goal, execution steps, relevant files, and a copy-ready prompt.
- Output: The plan is printed with clear section headers. The copy-ready prompt is printed inside `>>` delimiters for easy selection.

If you run swallow from inside a directory that matches a known project root, swallow automatically selects that project; there is no need to run `swallow project use` every time.

All data is stored locally in `~/.swallow/swallow.db` (SQLite). Nothing is sent anywhere except to your configured LLM provider for planning operations.

| Level | Context depth | Sections included |
|---|---|---|
| `compact` | 2 outputs, 3 docs | title, goal, execution plan, copy prompt |
| `standard` | 5 outputs, 8 docs | + why now, context, files, constraints, validation |
| `detailed` | 10 outputs, 15 docs | all sections, maximum context |

Swallow works with any coding agent. Use `swallow watch` and `swallow hooks install` to automate ingestion so you never need to copy-paste agent outputs manually.

`swallow watch` watches `~/.swallow/inbox/` for new files and ingests them automatically:

```shell
# In a separate terminal (or background):
swallow watch

# Now drop any agent output into the inbox:
cp my_agent_output.md ~/.swallow/inbox/
# → swallow ingests it immediately
```

Files are move
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.