Show HN: Botference – a TUI that uses Claude Code and Codex simultaneously to plan
📦 Open source
#ai deal
#anthropic
#claude
#codex
#command r
#multi-llm
#openai
#tui
#dev tools
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Botference is a multi-LLM terminal user interface (TUI) tool that lets a user plan a project by collaborating with two AI models at once: Claude Code and Codex. It supports both an open "Council" mode, where the user steers the discussion directly, and a private "Caucus" mode, where the AIs debate among themselves and converge on a position, and it ultimately produces concrete plan files such as implementation-plan.md. It ships with two TUI backends, the Python-based Textual and the Node-based Ink. The planning feature works reliably, but the automated build mode is still an early experiment.
Full text
> **Note** This README was AI-generated. I (Angadh) have skimmed and supervised its creation.

Multi-LLM planning where you and the AIs collaborate in a council (open room, you steer) or the AIs hash things out in a caucus (private sidebar, they converge). The result is an implementation-plan.md and checkpoint.md you can take into any workflow.

The primary contribution of this repository is plan mode — a multi-LLM planning session where Claude and Codex collaborate in real time. This is the part that works well and is ready for use.

> **Warning** This is vibe-coded. Use at your own risk. Research-plan mode and build mode are deeply experimental — under active development, will change without notice, and not recommended for general use. If you are here to get things done, use `./botference plan` and take the resulting plan into your own workflow.

Ink TUI — council panel (left), caucus panel (right), input field and status line at the bottom.

Textual TUI — same layout, Python-based (default).

The TUI has two backends:

| Flag | Backend | Notes |
|---|---|---|
| `--textual` | Textual (Python) | Default. No extra install needed. |
| `--ink` | Ink (Node.js/React) | Requires `npm install` in `ink-ui/`. Supports multiline input (Shift+Enter). |

Both present the same council + caucus interface. Use `--claude` to skip Codex and run a solo Claude session (no TUI, just the Claude CLI).

Botference uses two metaphors for multi-LLM collaboration, shown as two panels in the TUI:

- Council — an open room where you and the AIs all talk. You steer the conversation, ask questions, push back, and direct who speaks. This is plan mode: you're in the room with Claude and Codex, hashing out what to build.
- Caucus — a private sidebar where the AIs talk to each other without you. You kick it off with /caucus and the models debate, negotiate, and converge on a recommendation. You get the summary and decide what to do with it.

The council is where decisions get made.
The caucus is where the AIs work out their disagreements so they can bring you a coherent proposal instead of conflicting opinions.

```
# Planning (the main event)
./botference plan                 # Council: you + Claude + Codex
./botference plan --claude        # Solo Claude (no Codex)
./botference plan --ink           # Use Ink TUI instead of Textual

# Building (experimental)
./botference build                # Interactive build loop
./botference -p build             # Headless build loop
./botference -p build 10          # Headless, max 10 iterations
./botference -p build --parallel  # Phase-level parallelism

./botference --help               # Full usage + supported models
```

API keys: If your terminal already has Claude Code and Codex configured via subscription accounts (e.g. Claude Max, OpenAI Plus), Botference works out of the box with no extra setup. In build mode, subscription users run through the MCP fallback path (`fallback_agent_mcp.py` → `claude -p`). If you provide an API key, build mode uses the direct agent runner (`botference_agent.py`) instead, which owns the tool-calling loop and gives finer control over retries, context tracking, and token accounting.

Copy the example env file and add your keys:

```
cp .env.example .env
# Then edit .env with your ANTHROPIC_API_KEY and/or OPENAI_API_KEY
```

Botference will auto-load keys from .env when they are not already in your environment. Do not commit .env to version control.

Type freely to send a message. By default your first message goes to both models; after that, messages are sticky to whoever you last addressed.

| Input | Effect |
|---|---|
| `@all` | Send to both Claude and Codex |
| `@claude` | Send to Claude only |
| `@codex` | Send to Codex only |
| (plain text) | Auto-routed (first message → @all, then sticky to last target) |

Messages in the council panel are labelled by speaker:

- Claude — Claude's responses
- Codex — Codex's responses
- You — your messages
- System — the framework talking to you, not an LLM.
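The sticky routing rule described above can be sketched in a few lines. This is an illustrative reconstruction, not Botference's actual code — the `Router` class and method names are hypothetical, and only the behavior (first plain message fans out to both models, later ones stick to the last explicit target) comes from the README.

```python
class Router:
    """Hypothetical sketch of @all/@claude/@codex routing with sticky targets."""

    BOTH = ("claude", "codex")

    def __init__(self):
        self.last_target = None  # becomes the sticky target after the first message

    def route(self, text):
        """Return (targets, message_body) for one line of user input."""
        # Explicit @-mentions always override stickiness.
        if text.startswith("@all "):
            self.last_target = self.BOTH
            return self.BOTH, text[len("@all "):]
        for name in self.BOTH:
            prefix = f"@{name} "
            if text.startswith(prefix):
                self.last_target = (name,)
                return (name,), text[len(prefix):]
        # Plain text: first message goes to both, afterwards it is sticky.
        if self.last_target is None:
            self.last_target = self.BOTH
        return self.last_target, text

r = Router()
print(r.route("hello"))           # first plain message → both models
print(r.route("@codex fix this"))  # explicit target
print(r.route("thanks"))          # sticky to the last target
```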
Covers:

- Session lifecycle (starting, relaying, or tearing down a model)
- Mode changes (caucus started, draft complete)
- Errors and warnings
- Command feedback (lead set, usage info)
- Approval prompts ("Write plan? [y/n]")

| Command | What it does |
|---|---|
| `/caucus` | Start a caucus — Claude and Codex debate the topic privately (3-5 rounds) and return a summary with a recommendation. If they agree on a writer, the lead is set automatically. |
| `/lead @claude\|@codex` | Manually set which model writes the plan. You can also use `/lead auto` to let a future caucus decide. |
| `/draft` | The lead model drafts a plan based on the conversation so far. No files are written yet. Requires a lead (set one manually or let `/caucus` pick). |
| `/finalize` | The lead drafts, the other model reviews, then plan files (implementation-plan.md, checkpoint.md) are written after your approval. |
| `/relay @claude\|@codex` | Tear down a model's session and bootstrap a fresh one with a structured handoff. Useful when context is getting long. |
| `/status` | Show context usage, lead, mode, and session state. |
| `/help` | Show the command reference. |
| `/quit` | Exit without writing files. |

Typi
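The `/caucus` flow — a few private rounds of debate ending in a summary and a lead recommendation — can be sketched as a simple loop. This is a toy model under stated assumptions: each model is represented as a plain callable taking the transcript and returning a string, and `run_caucus` is a hypothetical name, not part of Botference.

```python
def run_caucus(topic, models, rounds=3):
    """Toy caucus: the models alternate for `rounds` rounds, then the last
    speaker produces the summary the user actually sees (hypothetical sketch)."""
    transcript = [("system", f"Debate privately and converge: {topic}")]
    for _ in range(rounds):
        for name, model in models.items():
            transcript.append((name, model(transcript)))
    # Only the summary leaves the caucus; the debate itself stays private.
    _, summarizer = list(models.items())[-1]
    return summarizer(transcript + [("system", "Summarize and recommend a lead.")])

# Toy stand-ins for the two models: each reply just counts its own turns.
claude = lambda t: f"claude round {sum(1 for s, _ in t if s == 'claude') + 1}"
codex = lambda t: f"codex round {sum(1 for s, _ in t if s == 'codex') + 1}"
print(run_caucus("schema design", {"claude": claude, "codex": codex}))
```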
This analysis was written by the Genesis Park editorial team with the help of AI. The original can be found via the source link.