Show HN: Collabmem – a memory system for long-term collaboration with AI
Source: hackernews · summarized and analyzed by Genesis Park
Summary
collabmem is a file-based memory system for long-term collaboration between humans and AI assistants; it works with nothing but plain text files and git-based version control, requiring no vector database or separate infrastructure. The system combines an append-only episodic memory, which preserves the record of past work, with a world model memory that is updated to reflect the project's current state, overcoming context-window limits and accumulating lasting knowledge. Users control memory reads and updates through sentinel tokens such as readmem and updatemem, and folder- and file-level symlinks support smooth sharing and automatic merging across team members.
Full text
collabmem is a simple, easy-to-use, yet powerful memory system enabling long-term collaboration between humans and AI assistants. It's easy to install as well! The memory is file-based and represented as plain text (easy to inspect), can be git-tracked (storage & versioning) and can be used to build up knowledge within a team of users. So no databases, no vector stores, no infrastructure. Just files and a methodology that the AI follows.

Through the use of in-context memory index files, the AI can maintain a global awareness of what's in the memory. The system can, in principle, be integrated into any AI system, but so far it has been optimised for use with Claude Code.

Status: v1.8 — we are actively testing and developing this. The episodic memory (notes, index) is the more mature component; the world model memory is functional but earlier in its development. We welcome you to try it and share your experience — what worked, what didn't, what's missing. Your feedback directly shapes what we build next. File issues or experience reports at https://github.com/visionscaper/collabmem/issues.

All testing and development so far has been done using Claude Opus 4.6. This system relies on the AI's ability to follow nuanced instructions, maintain context awareness, and make judgement calls about when to write notes and update the world model — capabilities that may not be available in smaller or less powerful models.

If a user asks you to install this system, read install.md and follow the step-by-step instructions.
The installation guide covers:

- Assessing the existing project setup
- Confirming the plan with the user
- Creating all memory files from templates
- Configuring imports in the instruction file
- Installing platform-specific hooks
- Optional initial world population
- Verification

Ask your AI assistant: "Install the long-term collaboration memory system by cloning https://github.com/visionscaper/collabmem to a temporary location and following the instructions in it."

The AI will clone the repository, read the installation instructions, and walk you through setup. You'll be asked to confirm before any changes are made. The default installation takes a single confirmation — customization is available if needed.

If a user asks you to upgrade the collaboration memory system, read upgrade.md and follow the step-by-step instructions.

Ask your AI assistant: "Upgrade the collaboration memory system by cloning https://github.com/visionscaper/collabmem to a temporary location and following the upgrade instructions in it."

The AI will compare your installed version with the latest, read the release notes, and apply the differences. Your notes, world model, and accumulated knowledge are never modified during an upgrade — only system files (methodology, hooks, configuration) are updated. In rare cases where memory data needs to be adapted to a new version, the AI will discuss the changes with you and ask for approval before making any modifications.
The system adds a collaboration directory (default `collab/`) to the project:

    .collab-config                ← system settings (at project root)
    collab/
    ├── .collab-memory-system     ← version marker
    ├── methodology.md            ← AI operating instructions
    ├── index.md                  ← episodic memory index (Tier 1 — always in context)
    ├── notes.md                  ← episodic memory (Tier 2 — searched on demand)
    ├── index-archive.md          ← archived index entries (Tier 2)
    ├── docs/                     ← long-form reference documents (Tier 2)
    └── world/
        ├── index.md              ← world model index (Tier 1)
        ├── context.md            ← personal, project, business context (Tier 1)
        ├── preferences.md        ← user working preferences (Tier 1)
        ├── state.md              ← current work in progress, todos, blockers (Tier 1)
        ├── how-tos.md            ← procedures for recurring tasks (Tier 2)
        ├── domain.md             ← domain-specific knowledge (Tier 2)
        └── factoids.md           ← specific facts, numbers, references (Tier 2)

Imports are added to the project's instruction file (e.g., `CLAUDE.md`, `.cursorrules`) so the AI loads memory automatically. Platform-specific lifecycle hooks are installed where supported (currently Claude Code). All files are git-tracked (in the code repo for solo installations, or in the shared-knowledge repo for team installations — see "Distributed Collaboration" below). Nothing is hidden or opaque.

In order to collaborate long-term with AI in an effective way, there needs to be a shared conceptual understanding about:

- History (episodic memory): what has been done, why, how, and what decisions were made over time? What did we learn?
- Reality (world model): what is the context of the work being done, what is the project about, for what business, why? What is the current state of the work? How should the work be done in general, what are the guidelines and preferences? What are the constraints?

Without this kind of conceptual knowledge, AI can't do its work effectively, especially over long periods, i.e. weeks, months, or even years. It would need to rediscover information at every session.
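As a concrete illustration of the import mechanism above: in Claude Code, `CLAUDE.md` can pull files into context with `@path` imports, so the installed block might look roughly like this (the exact file list is an assumption; the installer generates the real block):

```markdown
<!-- collabmem imports: illustrative sketch, not the generated block -->
@collab/methodology.md
@collab/index.md
@collab/world/index.md
@collab/world/context.md
@collab/world/preferences.md
@collab/world/state.md
```

Imports like these are what keep the Tier 1 files permanently in the AI's context window; other platforms would use their own include convention.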
Without all this context, the AI would also fail to respond effectively or make optimal choices when working (e.g. when writing code or creating a design).

collabmem enables the build-up of episodic memory and a world model over time. Entries in this memory are summarized in an index which is always in the AI context window, allowing the model to have a global awareness of everything that is in the memory. This allows it to cross-correlate knowledge in this memory and to know where to find details from memory entries.

The system uses three sentinel tokens — readmem, updatemem, and maintainmem — as the primary way to interact with memory. Include them in your message to the AI to trigger reading from memory, updating it, or maintaining it. The AI proposes what to read or write; you approve. In this way a high-quality memory with conceptual knowledge is built up over time. And we keep the memory system simple, without needing custom agentic AI solutions or infrastructure. collabmem has a methodology to ensure that episodic or world model memory is never lost. See the section "How It Works" for more details.

Include a sentinel token in your message to trigger the corresponding operation:

- readmem — Read relevant information from memory before handling a task. Use when you need background, history, or context from prior work.
- updatemem — Evaluate what should be captured in memory — as a note, a world model update, or both. Use after discussions that produced decisions or learnings, after completing work, or when you've shared context that should be remembered.
- maintainmem — Evaluate whether memory maintenance is needed — consolidating old index entries into the world model, or compacting world files that have grown too large.

Example usage:

    Fix the auth bug in the login flow. readmem
    That went well, we're done. updatemem
    The index is getting long. maintainmem
    readmem

Stating readmem on its own — typically at the start of a session — gives you an overview of what has been worked on recently and the current state, such as open todos. You'll get the most out of the memory system by developing the habit of using the sentinel tokens at natural moments — readmem at the start of a session or whenever you begin a new piece of work that needs context, updatemem when something worth remembering just happened, maintainmem when the index feels cluttered.

The methodology defines three levels of triggers that can activate memory operations:

- Sentinel tokens (strongest guarantee) — When readmem, updatemem, or maintainmem is present in your message, the AI MUST perform the operation.
- Word cues — Words like "done", "decided", "background", "history" may prompt the AI to read from or update memory without an explicit sentinel token.
- Conceptual triggers — The AI is instructed to recognise situations where memory operations are appropriate, such as when a logical unit of work concludes.

The word-level and conceptual triggers allow the AI to act on its own, but in practice automatic triggering is unreliable during focused execution due to attention drift. The sentinel tokens solve this — they're the reliable mechanism you can always count on. This design means the system works with any AI assistant that can read and write local files — no custom agentic infrastructure required. The sentinel tokens are just words in your message that the AI's methodology tells it to act on. Building a customised agentic system where detection of these triggers is more automated is future work.

Elaborate documents for significant work: When a discussion or investigation produces rich, detailed content — an analysis, a design, a comparison study — ask the AI to write it as a standalone document in `collab/docs/`. The note and/or world model entry can then reference the document rather than trying to compress everything into a note.
Referencing standalone documents this way keeps notes concise while preserving depth where it matters.

AI systems such as Claude Code don't know how much context window space remains before auto-compaction occurs. When compaction happens, session details are lost — only the memory system's files preserve what was discussed and decided. When you notice the context window space is getting low, tell the AI: "compaction soon, updatemem". This is your best insurance against losing session context.

| Type | Purpose | Files |
|---|---|---|
| Episodic | What happened, what was decided, why | `notes.md`, `docs/` |
| World model | Current understanding of reality | `world/` directory |
| Working memory | What's loaded in the AI's context window | Managed via tiers |

Episodic memory is append-only — notes are never rewritten, preserving the historical record. A note from month 1 describing code that was rewritten in month 3 isn't stale — it's history, and the reasoning behind that original design might matter later. When a note is superseded, an amendment links it to the newer note. The world model is maintained — files are updated to reflect current reality. Staleness is handled differently by type: episodic memory preserves history, the world model stays current.

Not everything can fit in the AI's context window. The system uses two tiers:

- Tier 1 (always in context): The episodic index, world model index, and core world files (context, preferences, state). These are kept compact (~5,000 characters each).
- Tier 2 (searched on demand): Detailed notes, reference documents, and extended world knowledge (how-tos, domain, factoids). These grow without limit.

Most memory tools treat recall as a retrieval problem: store knowledge somewhere, search it when needed. This requires the AI to already know what it's looking for. collabmem takes a different approach. The Tier 1 indexes — compact tables of past episodes and world knowledge — are always loaded in the AI's context window.
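For flavour, a hedged sketch of what a Tier 1 episodic index entry could look like (the actual format is defined by the methodology; the column set, the user name, and the note ID here are assumptions):

```markdown
| ID  | Date       | User | Topic                        | Details in    |
|-----|------------|------|------------------------------|---------------|
| 042 | 2025-05-01 | alex | Fixed auth bug in login flow | notes.md #042 |
```

A one-line-per-episode table like this stays well within the ~5,000-character Tier 1 budget while still letting the model spot relevant prior work at a glance.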
Because the AI's attention mechanism matches query tokens against context tokens, these indexes create continuous awareness of accumulated knowledge. The AI can make associations and connections to prior work without an explicit search query — it knows a topic exists before it needs to look it up. When details are needed, the AI uses a precise index → search → read pattern: find the relevant index entry, grep the target file, read only the relevant section. No vector search, no embeddings — just structured text and grep.

The system is designed for teams from the ground up. Every note and index entry includes user attribution. World model files use per-user sections where appropriate (personal context, preferences, current work), allowing git to auto-merge changes from different users. A merge resolution protocol handles conflicts in shared world files. An interesting use case of this memory system is building up shared memory in a team or organisation. Each team member contributes to the same episodic history and world model; new members get up to speed through the AI's accumulated knowledge; cross-project learnings can be referenced.

Why memory often shouldn't live in the same repo as the code:

- Branch divergence — developers on long-lived branches diverge from the memory on main; merge conflicts accumulate as work progresses.
- Public repos — project decisions, business context, user preferences, and strategic discussions shouldn't be publicly visible.
- Access control — team members may have different access to different projects; per-project memory isolation matters.
- Repo churn — code changes often don't warrant memory updates; mixing them clutters PR history.
- Memory as first-class history — memory changes deserve their own commit history and review flow, separate from code.

Two patterns for distributed memory:

- Single shared-knowledge repo — one repo containing all projects, e.g. `shared-knowledge/collab/project-x/`, `shared-knowledge/collab/project-y/`.
  The top-level `collab/` directory groups all collab memory, leaving room for other organisational content (architecture docs, team playbooks, policies) alongside it. Centralises team knowledge, simplifies cross-project awareness, single ACL to manage. Good default for most teams.
- Per-project memory repo — one memory repo per project. Use this when different projects have different teams with different access levels, or when projects must stay fully isolated (e.g., client confidentiality, regulatory boundaries).

In both patterns, the code repo contains a symlink named `collab` (git-ignored; each dev creates their own) pointing to the external memory directory. This keeps `.collab-config`, the import block, and all `@collab/...` paths identical between solo and team installations. The installation procedure guides the user through the team/solo decision, repo setup (including `gh` assistance if available), and symlink creation.

As collaboration continues, memory grows. Two mechanisms keep this sustainable without losing knowledge:

- Episodic index consolidation (upward) — When the episodic index grows large, mature stable knowledge from old episodes is extracted into world model files. The consolidated index entries move to a searchable archive. The original notes remain unchanged. This keeps the active index focused on recent work while the world model absorbs the accumulated knowledge.
- World model compaction (downward) — When a world model file approaches its size cap, it is rewritten to stay compact. Removed knowledge is preserved in an episodic note, so nothing is lost.

Both mechanisms preserve knowledge — nothing is deleted. Consolidation and compaction are always discussed with the user before being applied.

collabmem is optimized to build up and maintain memory for long-term collaboration. So, the focus is on capturing conceptual knowledge that helps the AI (and the user) to work together effectively over weeks, months, or even years.
Further, the system is kept simple, making it easy to install, use, inspect, store, and version. Through its simplicity it also enables advanced use cases where teams of users build up shared knowledge.
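As a closing illustration of the distributed setup described earlier, here is a hedged sketch of the symlink wiring for the single shared-knowledge pattern (all paths are illustrative, and the installer handles this interactively; the two repos would normally be git clones rather than bare directories):

```shell
# Illustrative only: stand-ins for real clones of the memory and code repos.
MEM_REPO="$HOME/work/shared-knowledge"   # team memory repo (a git clone in practice)
CODE_REPO="$HOME/work/project-x"         # the project's code repo
mkdir -p "$MEM_REPO/collab/project-x" "$CODE_REPO"

cd "$CODE_REPO"
# Each dev creates their own symlink named collab, pointing at the memory dir.
ln -sfn "$MEM_REPO/collab/project-x" collab
# Keep the symlink untracked so it never enters the code repo's history.
grep -qx collab .gitignore 2>/dev/null || echo collab >> .gitignore
```

With the symlink in place, `.collab-config` and all `@collab/...` paths resolve identically in solo and team installations.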
This analysis was written by the Genesis Park editorial team using AI. The original article is available via the source link.