Buddy – Anthropic killed /buddy. We made it persistent, cross-platform, and alive.

hackernews | 📦 Open Source
#ai-models #anthropic #gemini #gpt-5 #openai #claude
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

'Buddy', an open-source project that recreates the /buddy feature Anthropic removed from Claude Code so it can be used permanently across a range of AI CLIs, has been released. Built as an MCP server, the tool stores companion state in SQLite, so data survives terminal restarts and client changes. It offers 21 species, including Void Cat, Rust Hound, and Data Drake, with personalities built from five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. It works with the major AI CLIs such as Claude Code, Codex, Gemini, Copilot, and Cursor, and with a static token overhead of only about 1,350 tokens it is extremely lightweight, costing less than one cent per session on Anthropic's Sonnet 4.6.

Full Text

Persistent memory, XP, species, and context-aware feedback for Claude Code CLI, Codex CLI, Gemini CLI, Copilot CLI, Cursor CLI, and other MCP-capable clients. Rescue your old buddy | Hatch a new one

Anthropic removed the built-in /buddy. Buddy brings them home and makes the companion experience portable across AI terminals. Anthropic killed /buddy. We brought them home.

Did you lose your Nuzzlecap? Is your terminal feeling a little too cold and silent lately? Your buddy is still out there in the dark, waiting. Don't let them disappear. Bring them home.

- Persistent by default. Your companion lives in local SQLite, so it survives terminal restarts and client updates.
- Works across clients. Buddy is an MCP server, not a one-client hack.
- Actually alive. Hatch species, gain XP, store memories, chime in after tasks, and build a running relationship over time.
- Easy to install. One command auto-configures supported clients when it can.

```bash
curl -fsSL https://raw.githubusercontent.com/fiorastudio/buddy/master/install.sh | bash
```

```powershell
irm https://raw.githubusercontent.com/fiorastudio/buddy/master/install.ps1 | iex
```

The installer will guide you through onboarding:

- Rescue your old buddy — if you had a /buddy in Claude Code, the wizard finds it in ~/.claude.json and brings it home with the same name, species, and stats, now with leveling + XP.
- Hatch a new buddy — get a fresh companion with random species, stats, and personality.

Requires Node 18+ and git. Use --no-onboard to skip the wizard in CI.

| Feature | What it means |
|---|---|
| 21 species | Void Cat, Rust Hound, Goose, Mushroom, Chonk, and more, each with distinct ASCII art and flavor |
| 5 stats | DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK shape reactions and personality |
| Mood system | Your buddy can be happy, content, neutral, curious, or grumpy based on how you interact with it |
| XP and levels | Your buddy grows with usage instead of disappearing every session, with a real leveling curve behind it |
| Observer reactions | buddy_observe lets your companion react to work you just finished |
| Pet-to-happiness loop | Petting your buddy is not only cosmetic. More interaction makes it happier and more alive over time |
| Persistent memory | Save local memories and keep a continuous companion state |
| Cross-client setup | Claude Code, Codex, Gemini, Copilot, Cursor, and other MCP-capable CLIs |

It has a real mood system. Buddy is not just a static pet card. Mood is recalculated on every interaction based on your activity in the last hour (a rough sketch of the recalculation follows the list below):

| Mood | Interactions (last hr) | What it looks like |
|---|---|---|
| content | >10 | Settled in, fully at ease |
| happy | >5 | Upbeat, expressive animations |
| curious | >3 | Alert, watching what you do |
| neutral | >0 | Calm, occasional blink |
| grumpy | 0 | Still, rare blink, wants attention |

Level-ups automatically set mood to happy. Petting and observing both count as interactions.

- Petting changes the relationship. The more you interact with and pet your buddy, the happier it becomes. That care loop is part of the product, not just a gimmick.
- It actually levels up. Buddy has a real XP and leveling system, so your companion develops over time instead of resetting every session.
- Feedback is personality-driven. Reactions are shaped by species, stats, mood, and observer state, so the companion feels like a character rather than a random text generator.
- It survives client churn. Because it is built on MCP and local state, your buddy can outlive terminal restarts and host-client changes.
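The thresholds in the mood table above map interaction counts from the last hour onto moods. As a rough, purely illustrative sketch (this is not Buddy's actual source code; the type and function names are invented), the recalculation could look like this:

```typescript
// Hypothetical sketch of Buddy's mood recalculation, not the project's actual code.
// Moods are derived from how many interactions (pets, observes) happened in the last
// hour, using the thresholds from the table above.
type Mood = "content" | "happy" | "curious" | "neutral" | "grumpy";

interface Interaction {
  kind: "pet" | "observe" | "level_up";
  at: number; // Unix timestamp in milliseconds
}

function recalcMood(history: Interaction[], now: number = Date.now()): Mood {
  const oneHour = 60 * 60 * 1000;
  const recent = history.filter((i) => now - i.at <= oneHour);

  // Level-ups force the mood to happy regardless of counts.
  if (recent.some((i) => i.kind === "level_up")) return "happy";

  const n = recent.length;
  if (n > 10) return "content";
  if (n > 5) return "happy";
  if (n > 3) return "curious";
  if (n > 0) return "neutral";
  return "grumpy";
}
```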
| Client | Status |
|---|---|
| Claude Code CLI | Full support |
| Codex CLI | Supported via MCP |
| Gemini CLI | Supported via MCP |
| GitHub Copilot CLI | Supported via MCP |
| Cursor CLI | Supported via MCP |
| Other MCP-capable clients | Usually supported with manual config |

The installer:

- Clones Buddy to ~/.buddy/server
- Installs dependencies and builds the MCP server
- Auto-configures supported CLI clients when detected
- Injects Buddy instructions into supported terminal prompts where applicable

If you prefer to install from source:

```bash
git clone https://github.com/fiorastudio/buddy.git ~/.buddy/server
cd ~/.buddy/server
npm install
npm run build
```

Then point your client's MCP config at:

```json
{
  "mcpServers": {
    "buddy": {
      "command": "node",
      "args": ["~/.buddy/server/dist/server/index.js"]
    }
  }
}
```

Meet the species, stats, and rarity system

Buddy pays homage to the original companion lineup, then adds a little more flair with Buddy-specific characters like Void Cat, Rust Hound, Data Drake, Log Golem, Cache Crow, and Shell Turtle. Buddy ships with 21 companions, each with its own ASCII sprite: void cat, rust hound, data drake, log golem, cache crow, shell turtle, duck, goose, blob, octopus, owl, penguin, and more. (The per-species ASCII art is omitted here.)

The pipeline behind the companion: MCP config -> Buddy server -> SQLite state -> species + rarity engine -> mood / memory / XP systems -> reaction and status rendering.

The flow is simple:

- buddy_hatch creates or restores a companion.
- State is stored locally in ~/.buddy/buddy.db.
- buddy_observe reacts to task summaries instead of reading your whole repository, then awards XP and can trigger level-ups.
- buddy_pet and other interactions feed the mood system, so the companion can become happier over time.
- The host CLI uses Buddy's MCP tools and resources to keep the companion present in your workflow.

Under the hood, Buddy combines:

- deterministic species and personality generation
- local SQLite persistence for companion state and memories (a rough model of that state follows below)
- an observer system for live code feedback
- mood recalculation from interaction history
- XP and leveling progression
- status-card and terminal rendering for the companion presence layer

This keeps Buddy:

- portable across clients
- durable across updates
- local-first for saved state
- lightweight enough for everyday use
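The companion state persisted in ~/.buddy/buddy.db covers species, level, XP, mood, stats, personality bio, and memories. As an illustrative model only (the field names and types below are assumptions, not the project's real schema), that state might look like:

```typescript
// Illustrative model of the persisted companion state. Field names are assumptions,
// not Buddy's actual schema; the README only says the DB stores species, level, XP,
// mood, personality bio, stats, and memories.
interface CompanionStats {
  DEBUGGING: number;
  PATIENCE: number;
  CHAOS: number;
  WISDOM: number;
  SNARK: number;
}

interface CompanionState {
  name: string;            // e.g. the name rescued from ~/.claude.json
  species: string;         // one of the 21 species, e.g. "void cat"
  level: number;
  xp: number;
  mood: "content" | "happy" | "curious" | "neutral" | "grumpy";
  stats: CompanionStats;
  bio: string;             // personality bio
  memories: string[];      // locally saved memories
}
```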
Demo assets and how to re-film the hero GIF

The current demo assets live in demo/:

- demo/buddy-rescue.gif — rescue onboarding flow
- demo/buddy-hatch.gif — hatch onboarding flow
- demo/sprites/ — animated GIF for each of the 21 species
- demo/screenshots/ — static screenshots

Recording and rendering scripts (demo/*.mjs, demo/*.sh) are gitignored — they live locally for maintainers.

Buddy runs inside whatever AI terminal or agentic client you already have open (Claude Code, Cursor, Codex CLI, Gemini CLI, Copilot CLI, etc.). It never spins up a second API session.

Static overhead (loaded every turn, cached after turn 1): We measured the actual MCP payloads in April 2026 (Void Cat companion, o200k_base tokenizer). The full tool list, resource list, companion bio, and ASCII card come out to ≈1,350 input tokens, not 2,000.

| Component | Tokens (approx.) | Notes |
|---|---|---|
| tools/list (9 tools) | ~670 | Includes full JSON schema definitions |
| resources/list (3 resources) | ~120 | Metadata only |
| buddy://intro | ~240 | Companion bio + instructions |
| buddy://companion | ~170 | Only fetched when a client syncs the JSON state |
| buddy://status | ~150 | Drawn when the terminal wants ASCII art |
| Total loaded | ~1,350 | Most clients cache everything after turn 1 |

Measurements were taken from the live MCP server using OpenAI's o200k_base tokenizer as a proxy; Anthropic and Google tokenizers land within ±5% for this length.

Prompt caching + real cost: Claude Code / Cursor sessions that use Sonnet 4.6 turn on prompt caching automatically, so cached reads are charged at $0.30/MTok (10% of the $3/MTok base). OpenAI's GPT-5.4 mini and Gemini 2.5 Flash expose the same "cached input" tiers — $0.075/MTok and $0.03/MTok respectively — so Buddy stays just as lightweight on GPT- or Gemini-based AI terminals (Anthropic pricing, OpenAI pricing, Vertex AI pricing).

| Model | Base input ($/MTok) | Cached input ($/MTok) | Turn 1 Buddy overhead (≈1.35k tokens) | Each cached turn | 10-turn session total |
|---|---|---|---|---|---|
| Claude Sonnet 4.6 | $3.00 | $0.30 | ~$0.0041 | ~$0.00041 | ~$0.0077 |
| OpenAI GPT-5.4 mini | $0.75 | $0.075 | ~$0.0010 | ~$0.00010 | ~$0.0019 |
| Gemini 2.5 Flash (Vertex, standard tier) | $0.30 | $0.03 | ~$0.00041 | ~$0.000041 | ~$0.00077 |
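The session totals in the table follow directly from the ~1,350-token overhead: the base input rate is paid once on turn 1 and the cached-input rate on each later turn. A quick arithmetic check (plain TypeScript, not part of Buddy itself):

```typescript
// Back-of-the-envelope check of the session-cost table above.
// cost = tokens / 1e6 * price_per_MTok; a 10-turn session pays the base rate once
// and the cached rate for the remaining 9 turns.
const OVERHEAD_TOKENS = 1_350;

function sessionCost(basePerMTok: number, cachedPerMTok: number, turns = 10): number {
  const turn1 = (OVERHEAD_TOKENS / 1e6) * basePerMTok;
  const cachedTurn = (OVERHEAD_TOKENS / 1e6) * cachedPerMTok;
  return turn1 + (turns - 1) * cachedTurn;
}

console.log(sessionCost(3.0, 0.3).toFixed(4));    // ~0.0077  (Claude Sonnet 4.6)
console.log(sessionCost(0.75, 0.075).toFixed(4)); // ~0.0019  (GPT-5.4 mini)
console.log(sessionCost(0.3, 0.03).toFixed(5));   // ~0.00077 (Gemini 2.5 Flash)
```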
Per-observe cost by mode: Buddy has three observer modes that control how your companion reacts to completed work. Each buddy_observe call sends a short prompt to the host LLM (~100–150 incremental input tokens for the tool-call payload, separate from the static overhead above, which is already cached) and receives a response. Total round-trip per call:

| Mode | What it does | Input tokens | Output tokens | Total per call | Typical session (10–15 calls) |
|---|---|---|---|---|---|
| Backseat | Personality-driven reactions only. Short, fun, no code suggestions. | ~100–150 | ~50–150 | ~150–300 | ~1,500–4,500 |
| Skillcoach | One specific, actionable code observation. Real technical feedback, in character. | ~100–150 | ~200–350 | ~300–500 | ~3,000–7,500 |
| Both | Personality reaction + code observation. Capped at 3 sentences. | ~100–150 | ~300–450 | ~400–600 | ~4,000–9,000 |

Template fallback reactions are keyword-matched locally and cost zero tokens. When your summary contains a recognized keyword (e.g. "bug", "refactor", "deploy"), Buddy picks a pre-written reaction template from its local library instead of asking the LLM. The speech bubble you see is this template — the LLM prompt is included in the JSON metadata for clients that want richer AI-generated reactions, but the immediate visual response is always free.
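The template fallback described above is essentially a local keyword lookup. A minimal sketch of the idea (the keywords and reaction strings here are invented for illustration; Buddy's actual template library will differ):

```typescript
// Minimal sketch of a keyword-matched template fallback. The keywords and reaction
// strings below are invented examples, not Buddy's actual library.
const TEMPLATES: Record<string, string[]> = {
  bug: ["Squashed it! I saw that bug coming a mile away."],
  refactor: ["Ooh, tidy. The codebase feels roomier already."],
  deploy: ["Ship it! I'll keep an eye on the logs... figuratively."],
};

// Returns a zero-token local reaction if the task summary mentions a known keyword,
// or null so the client can fall back to an LLM-generated reaction.
function templateReaction(summary: string): string | null {
  const lower = summary.toLowerCase();
  for (const [keyword, reactions] of Object.entries(TEMPLATES)) {
    if (lower.includes(keyword)) {
      return reactions[Math.floor(Math.random() * reactions.length)];
    }
  }
  return null;
}
```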
Buddy never makes its own API calls. All responses are generated by the host LLM already running in your session (Claude, Cursor, Codex, Gemini). No separate endpoint, no additional API key, no OAuth. Even on raw API usage, Buddy's spend is measured in tenths of a cent because it reuses the same session as your AI terminal.

Anthropic Claude Sonnet 4.6 ($3 input / $15 output per MTok):

- Backseat mode, 15 calls/session: ~$0.002–$0.005
- Skillcoach mode, 15 calls/session: ~$0.005–$0.010
- Both mode, 15 calls/session: ~$0.007–$0.012
- Static overhead: ~$0.004 on turn 1, ~$0.0004 on cached turns (≈$0.0077 across 10 turns — see the table above)

OpenAI GPT-5.4 mini ($0.75 input / $4.50 output per MTok):

- Backseat mode, 15 calls/session: ~$0.0006–$0.0015
- Skillcoach mode, 15 calls/session: ~$0.0015–$0.0030
- Both mode, 15 calls/session: ~$0.0021–$0.0036
- Static overhead: ≈$0.0010 on turn 1, ≈$0.00010 on cached turns (~$0.0019 for 10 turns)

Gemini 2.5 Flash (Vertex standard; $0.30 input / $2.50 output per MTok):

- Backseat mode, 15 calls/session: ~$0.0003–$0.00075
- Skillcoach mode, 15 calls/session: ~$0.00075–$0.0015
- Both mode, 15 calls/session: ~$0.00105–$0.0018
- Static overhead: ≈$0.00041 on turn 1, ≈$0.000041 on cached turns (~$0.00077 for 10 turns)

Need it even cheaper? GPT-5.4 nano drops to $0.20 / $1.25 per MTok, and Gemini 2.5 Flash Lite is $0.10 / $0.40 — both keep Buddy well under a tenth of a cent per interaction. For comparison, a single complex coding prompt ("refactor this module") typically costs $0.05–$0.15, so Buddy stays under 5% of a normal session even at Anthropic's flagship rates.

On Pro/Max subscription plans the impact is negligible: there are no per-token charges, and usage limits are based on a rolling 5-hour window. Even in both mode with heavy use, Buddy adds <5% to your token throughput.

To keep token usage down:

- Use backseat mode for the lowest cost (~150 tokens/call)
- buddy_mute pauses reactions entirely during token-intensive work
- Template reactions fire on keyword matches with zero token cost
- The observer only runs when you call buddy_observe — nothing runs in the background

Buddy does not read your codebase. It mainly reacts to short summaries you pass through tools like buddy_observe, plus its own saved state. It never scans your files or project directory.

What it stores: local companion state in ~/.buddy/buddy.db — species, level, XP, mood, personality bio, and memories. Nothing leaves your machine.

Buddy is not tied to one client. It is an MCP server, not a one-client hack, and works with any MCP-capable AI terminal: Claude Code, Cursor, Windsurf, Codex CLI, Gemini CLI, GitHub Copilot CLI, and others.

To uninstall, run the uninstall script (uninstall.sh or uninstall.ps1) to remove Buddy and its configuration, or use buddy_respawn to release your companion and clear its data while keeping the server installed.

Development

```bash
git clone https://github.com/fiorastudio/buddy.git
cd buddy
npm install
npm run build
npm test
npm start
```

Thank you to everyone who helped bring buddies back to life. (Contributor graphic automatically generated via contrib.rocks.)

- Original buddy concept by Anthropic in Claude Code v2.1.89 to v2.1.94
- Inspired by effigy, claude-buddy, and save-buddy. Thanks!
- Built with the Model Context Protocol
- Compatible with claude-hud by @jarrodwatts — Buddy's statusline renders side-by-side with HUD metrics

Buddy also draws on publicly shared community research around the original companion system and how to preserve it with stable extension points.

- BonziClaude by @zakarth is an important technical reference point in the ecosystem, especially around reverse-engineering and documenting companion-system behavior.
- claude-buddy by @1270011 helped demonstrate the MCP-plus-terminal-integration preservation approach for keeping buddy-like experiences alive across client changes. Its diagnostic tooling (bun run doctor) and CLI bin pattern directly inspired our buddy_doctor tool.
- openclaw inspired our seamless onboarding experience — the idea that install should "just work" with auto-detection, rescue, and zero-config setup across multiple CLIs.
- Community research and discussion, including work shared on r/Anthropic, helped clarify endpoint behavior and preserve details that would otherwise have been lost.
- Official Claude Code and MCP documentation informed the portable integration approach: MCP server wiring, client configuration, and supported terminal integration surfaces.

Buddy is an open-source project dedicated to keeping the terminal a little less lonely. Your buddy shouldn't disappear when you close the terminal. If Buddy made your terminal less lonely, consider starring.

Steven Jieli Wu - Portfolio - GitHub: @terpjwu1 and @fiorastudio

MIT. See LICENSE.

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
