Show HN: Cross-agent memory consolidation and context-rot mitigation, written in Rust

Source: hackernews · Summarized and analyzed by Genesis Park

Summary

Memory Bank, an open-source tool, uses a local SQLite database and a knowledge graph to share long-term memory and maintain continuity across AI agents such as Claude Code, Gemini CLI, and Codex. After a simple install, users run the `mb setup` command to create a dedicated namespace, configure the internal LLM model, and stand up their own local memory server. Notably, the internal analysis model can be chosen independently of the agent used directly, and options such as custom OpenAI-compatible endpoint integration and fine-grained retry limits allow a flexible, safe, and tailored setup.

Full text

Memory Bank gives agents shared, long-term memory across sessions and across tools. It runs locally, stores memory in your own namespaced SQLite databases, and works with Claude Code, Codex, Gemini CLI, OpenCode, and OpenClaw.

- Shared memory across supported agents instead of siloed memory per tool.
- A structured knowledge graph of memory that continuously evolves, as opposed to a flat text file.
- Local ownership of storage, namespaces, and internal model choice.
- Better continuity from captured user prompts, tool calls, tool results, and assistant replies.
- A simple day-to-day control surface through `mb` instead of manual server management.

The normal install path is the release installer plus `mb setup`. Install from GitHub Releases:

```sh
sh -c "$(curl -fsSL https://raw.githubusercontent.com/feelingsonice/MemoryBank/main/install.sh)"
```

If you already cloned this repo and just want to use the local installer script:

```sh
./install.sh
```

Then finish setup:

```sh
mb setup
```

`mb setup` walks you through:

- choosing a namespace
- picking the internal LLM provider and model
- storing the provider secret if needed
- installing and starting the managed background service
- wiring any supported agents it detects on your PATH

Verify that everything is healthy:

```sh
mb status
```

If the installer finished in a non-interactive shell and skipped setup, just run `mb setup` afterward. If `mb` is not available in the current shell yet, use `~/.memory_bank/bin/mb setup` or open a new shell.

If you later change `server.fastembed_model` with `mb config set`, the CLI will ask you to confirm it. The next service start will rebuild the vector index for the active namespace and re-encode any existing memories with the new embedding model. While that runs, `mb status` and `mb service status` will report that Memory Bank is not up yet because it is reindexing.

If you need to cap how many times finalized turns retry after retryable provider failures, use `mb config set server.max_processing_attempts` or change it in the `mb setup` advanced settings.
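The retry-cap semantics can be illustrated with a minimal sketch. This is not Memory Bank's code, just a shell model of the documented behavior: a finalized turn is retried up to `server.max_processing_attempts` times (default 10), then marked `exhausted` instead of retrying forever.

```shell
# Sketch of the documented retry cap (assumption: a simple bounded
# retry loop; the real server's internals are not shown in the source).
MAX_PROCESSING_ATTEMPTS=10
attempts=0
status=pending

process_turn() {
  # Stand-in for a provider call that keeps failing with a retryable error.
  return 1
}

while [ "$status" = pending ] && [ "$attempts" -lt "$MAX_PROCESSING_ATTEMPTS" ]; do
  attempts=$((attempts + 1))
  if process_turn; then
    status=done
  fi
done

# Once the cap is hit, the turn stops retrying and is marked exhausted,
# so later turns in the same conversation can continue processing.
if [ "$status" = pending ]; then
  status=exhausted
fi

echo "turn status: $status after $attempts attempts"
```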
The default is 10. Once a turn hits that cap it moves to `exhausted` instead of retrying forever, and later turns in the same conversation can continue processing.

Important: this retry-cap release updates the ingest turn-status schema. Existing namespace databases created before this change must be recreated or migrated externally before the new server will open them.

In a fresh agent session, ask it to remember something memorable and do at least one tool call:

> Remember that my favorite editor is Helix, then run pwd and summarize what you did.

Then ask it to retrieve memory before answering:

> Before answering, call retrieve_memory for my editor preference and tell me what you find.

If the setup is working, the agent should call `retrieve_memory` and answer using the stored note.

| Agent | Recall path | Capture path | Guide |
|---|---|---|---|
| Claude Code | HTTP MCP | Claude hooks -> memory-bank-hook | Claude Code |
| Codex | HTTP MCP | Codex hooks -> memory-bank-hook | Codex |
| Gemini CLI | HTTP MCP | Gemini hooks -> memory-bank-hook | Gemini CLI |
| OpenCode | HTTP MCP | OpenCode plugin -> memory-bank-hook | OpenCode |
| OpenClaw | stdio MCP proxy -> HTTP MCP | OpenClaw extension -> memory-bank-hook | OpenClaw |

Further reading: Troubleshooting, Architecture, Requirements, and the per-agent guides for Claude Code, Codex, Gemini CLI, OpenCode, and OpenClaw.

How it works:

- Your agent emits hook, plugin, or extension events. `memory-bank-hook` normalizes those events and sends them to the local Memory Bank service.
- The service assembles finalized turns, analyzes them with your configured provider, and stores memory notes plus local embeddings.
- Agents call `retrieve_memory` over MCP when prior context could improve the answer.

Important: the agent you use directly is separate from the internal provider Memory Bank uses for memory analysis. For example, you can use Claude Code or OpenClaw while Memory Bank runs on Gemini, OpenAI, Anthropic, or Ollama.
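Because the internal provider is configured separately from the agent, it can be switched without touching the agent at all. A minimal sketch using the `mb config set` and `mb service restart` commands shown elsewhere in this post; the provider string `ollama` and model name `llama3` are assumptions for illustration, not values confirmed by the source:

```shell
# Point Memory Bank's internal analysis at a local Ollama model
# (provider/model values are hypothetical; check `mb setup` for the
# exact strings your install accepts).
mb config set server.llm_provider ollama
mb config set server.llm_model llama3

# Restart the managed service so the new provider takes effect,
# then confirm the service is healthy.
mb service restart
mb status
```

The agent you talk to (Claude Code, OpenClaw, etc.) is unaffected; only the model that analyzes finalized turns changes.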
Memory Bank supports custom OpenAI-compatible endpoints (such as OpenCode Zen, Azure OpenAI, or self-hosted models).

Supported managed-service path: run `mb setup`, choose `open-ai` as the provider, then open Advanced settings and set the OpenAI base URL override. Or set it directly in managed config:

```sh
mb config set server.llm_provider open-ai
mb config set server.openai_url https://opencode.ai/zen/v1
mb service restart
```

That writes the same saved setting shown below:

```toml
[server]
llm_provider = "open-ai"
llm_model = "qwen3.6-plus-free"
openai_url = "https://opencode.ai/zen/v1"
```

When a custom `openai_url` is configured, Memory Bank will route all OpenAI API requests to that endpoint instead of the default `https://api.openai.com/v1`.

Important: custom OpenAI-compatible endpoints often require a provider-specific model ID as well. If the default OpenAI model does not exist on your endpoint, set `server.llm_model` to the exact model string your endpoint expects.

Lower-level direct server path:

```sh
OPENAI_API_KEY=your-api-key \
OPENAI_BASE_URL=https://open
```
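Before pointing Memory Bank at a custom endpoint, it can help to confirm the endpoint actually speaks the OpenAI API and to find the exact model IDs it exposes. A hedged sanity check, assuming the endpoint implements the standard `GET /v1/models` listing (most OpenAI-compatible servers do, but this is an assumption about any given provider):

```shell
# List the models a custom OpenAI-compatible endpoint serves.
# The base URL is the one from the example above; substitute your own.
# Requires OPENAI_API_KEY to be set if the endpoint needs auth.
curl -s -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://opencode.ai/zen/v1/models
```

If a model from the response differs from the default OpenAI model name, set `server.llm_model` to that exact string, as noted above.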

This analysis was produced by the Genesis Park editorial team with the help of AI. The original post is available via the source link.
