Engram – Salience-Gated Memory for Claude Code (Captures What Matters)

hackernews | 📦 Open source
#ai tools #ai models #claude #snarc #memory management #salience-based #claude code
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

The developer tool formerly known as "Engram" now goes by a new name, SNARC. SNARC remembers a developer's work patterns, errors, and fixes using five salience dimensions (Surprise, Novelty, Arousal, Reward, Conflict) and keeps only what is worth remembering. At session end it runs a "dream cycle" that extracts patterns from the stored observations; identity facts it extracts are auto-promoted to persistent memory by default, labeled so they can be verified, with confidence decay correcting mistakes over time.

Body

Salience-gated memory for Claude Code: captures what matters, forgets what doesn't, and consolidates patterns while "sleeping". Formerly "engram", renamed to SNARC to avoid a collision with an existing project. SNARC is the mechanism itself: Surprise, Novelty, Arousal, Reward, Conflict.

Conversation capture is the biggest change. Previous versions only observed tool calls: edits, commands, searches. We discovered that after hundreds of sessions, SNARC's memory was "Bash → Bash → Bash (51×)" and "focused work on file.py": mechanics without meaning. The actual value of a session (the insights, the reframes, the decisions, the "wait, damping should not be a thing" moments) lived in the conversation and vanished at compaction. The new PreCompact hook reads the full conversation transcript before it is compressed and stores the semantically salient turns. The mind, not just the hands.

Deep dream and auto-promote are now on by default. Both were opt-in; both are now on. We're in R&D, and the goal is observing what SNARC learns, not gatekeeping it. Deep dream runs at every session end. Identity facts auto-promote to Tier 3 so they influence future sessions immediately, and confidence decay corrects mistakes over time. Disable them with `snarc config deep_dream 0` or `snarc config auto_promote_identity 0` if this is too aggressive for your use case.

Per-project settings now live in the database, not environment variables. The SNARC_DEEP_DREAM and SNARC_AUTO_PROMOTE environment variables are removed; all settings are per-project via `snarc config`. Each launch directory is isolated, so what you configure for one project doesn't leak into another.

What we've learned so far: salience scoring on tool calls captures workflow mechanics but not intent. The heuristic extractors (tool sequences, concept clusters) are useful for accounting but shallow for memory. The real value is in conversation turns scored on semantic content: insight language, domain concepts, decisions, analogies.
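The per-project isolation could work like a small SQLite-backed key-value store keyed by launch directory. A minimal sketch of that idea follows; the table schema, database name, and function names here are illustrative assumptions, not SNARC's actual internals:

```python
import os
import sqlite3

SCHEMA = ("CREATE TABLE IF NOT EXISTS config "
          "(project TEXT, key TEXT, value TEXT, PRIMARY KEY (project, key))")

def set_config(db_path, key, value):
    """Rough equivalent of `snarc config <key> <value>` for the current project."""
    con = sqlite3.connect(db_path)
    con.execute(SCHEMA)
    # The launch directory identifies the project, so settings never leak across projects.
    con.execute("INSERT OR REPLACE INTO config VALUES (?, ?, ?)",
                (os.getcwd(), key, value))
    con.commit()
    con.close()

def get_config(db_path, key, default="1"):
    """Read a setting for the current launch directory, falling back to a default."""
    con = sqlite3.connect(db_path)
    con.execute(SCHEMA)
    row = con.execute("SELECT value FROM config WHERE project = ? AND key = ?",
                      (os.getcwd(), key)).fetchone()
    con.close()
    return row[0] if row else default

# Disable deep dream for this project only; other launch directories are unaffected.
set_config("snarc.db", "deep_dream", "0")
```

Storing settings in the database rather than environment variables means the scoping rule (one config per launch directory) is enforced by the primary key instead of by shell discipline.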
Deep dream operating on conversation observations produces qualitatively different patterns than deep dream on tool logs. This is the direction.

SNARC captures two things: what you do (tool calls) and what you discuss (conversation). Tool-use hooks observe every edit, command, and search. Before context compaction, the PreCompact hook reads the full conversation transcript and extracts semantically salient turns: insights, decisions, reframes, connections. Both streams are scored on five salience dimensions and stored if above threshold. At session end, a "dream cycle" extracts patterns from stored observations, either mechanically (heuristic) or semantically (LLM-powered deep dream). Over time, SNARC builds a structured memory of how you work and what you discuss: not just which tools you used, but why.

Context injection is automatic. SNARC injects relevant memories at session start, after each prompt (if related memories exist), and after context compaction. You don't need to query it; it surfaces what's relevant without being asked.

Most memory systems capture everything and retrieve by search. SNARC captures selectively using SNARC salience scoring, the same attention mechanism used by the SAGE cognition kernel:

| Dimension | What it measures | How |
|---|---|---|
| Surprise | How unexpected was this tool transition? | Tool transition frequency map |
| Novelty | Are these files/symbols/concepts new? | Seen-before set (SQLite) |
| Arousal | Errors, warnings, state changes? | Keyword pattern matching |
| Reward | Did this advance the task? | Success/build/test signals |
| Conflict | Does this contradict recent observations? | Recent result comparison |

Observations scoring below the salience threshold stay in the circular buffer briefly and are then evicted. High-salience observations persist. This mirrors biological memory: you don't remember every step, but you remember the one where you tripped.
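The salience gate described above can be sketched as a scoring function over the five dimensions plus a threshold check. Everything specific here is an assumption for illustration: the 0.5 threshold, the equal weighting, the keyword lists, and the dict-shaped observation record are not taken from SNARC's code, only from the table's descriptions:

```python
from collections import deque

SALIENCE_THRESHOLD = 0.5    # assumed cutoff; the post doesn't state the real value
buffer = deque(maxlen=50)   # Tier 0: last 50 raw observations, FIFO eviction
persisted = []              # stands in for the SQLite observation store

def score_observation(obs, seen_files, transition_freq, recent_results):
    """Score one observation on the five SNARC dimensions, each in [0, 1]."""
    # Surprise: rare tool transitions score high (frequency map normalized to [0, 1]).
    surprise = 1.0 - transition_freq.get((obs["prev_tool"], obs["tool"]), 0.0)
    # Novelty: any file not yet in the seen-before set.
    novelty = 1.0 if any(f not in seen_files for f in obs["files"]) else 0.0
    # Arousal: keyword matching for errors, warnings, state changes.
    arousal = 1.0 if any(k in obs["output"].lower() for k in ("error", "warn", "failed")) else 0.0
    # Reward: success / build / test signals.
    reward = 1.0 if any(k in obs["output"].lower() for k in ("passed", "success", "build ok")) else 0.0
    # Conflict: a recent observation on the same file reported a different result.
    conflict = 1.0 if any(set(r["files"]) & set(obs["files"]) and r["output"] != obs["output"]
                          for r in recent_results) else 0.0
    return (surprise + novelty + arousal + reward + conflict) / 5.0

def observe(obs, seen_files, transition_freq, recent_results):
    """Gate an observation: persist if salient, otherwise let it age out of the buffer."""
    s = score_observation(obs, seen_files, transition_freq, recent_results)
    buffer.append(obs)
    if s >= SALIENCE_THRESHOLD:
        persisted.append((s, obs))
    seen_files.update(obs["files"])
    return s

# A never-seen transition touching a new file and producing an error scores high.
obs = {"prev_tool": "Edit", "tool": "Bash", "files": ["damping.py"],
       "output": "FAILED tests/test_damping.py - AssertionError"}
s = observe(obs, set(), {}, [])  # surprise 1.0, novelty 1.0, arousal 1.0 -> 0.6
```

The example observation persists (it scores above the assumed threshold), while a routine "Bash → Bash" repeat on familiar files with quiet output would score near zero and simply fall off the end of the 50-slot buffer.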
| Tier | Name | Contents | Retention | Storage |
|---|---|---|---|---|
| 0 | Buffer | Last 50 observations, raw | Session only (FIFO) | In-memory |
| 1 | Observations | Salience-gated experiences (observed) | Decays after 7 days | SQLite |
| 2 | Patterns | Consolidated workflows, error-fix chains (inferred) | Decays 0.05/day, pruned below 0.1 | SQLite |
| 3 | Identity | Persistent project facts (human-confirmed) | Permanent | SQLite |

Injection is epistemically labeled: Tier 1 = "observed (directly recorded)", Tier 2 = "inferred (heuristic — may not be accurate)", Tier 3 = "auto-extracted, verify if unsure". Injection is conservative, biased toward omission: a wrong memory is more damaging than a missing one.

Consolidation has two modes. The heuristic mode extracts:

- Tool sequences: recurring workflows (e.g., Edit → Bash(test) → Edit = TDD loop)
- Error-fix chains: an error followed by a fix on the same file within 5 observations
- Concept clusters: multiple observations grouped around the same files

At session end, SNARC sends observations to Claude via `claude --print` for semantic pattern extraction:

- Workflows: recurring approaches (not just tool sequences)
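The Tier 2 retention rule in the table (confidence decays 0.05/day, patterns pruned below 0.1) can be sketched as a simple pass over the pattern store. The function name and the list-of-tuples layout are illustrative, but the decay and prune constants come straight from the table above:

```python
DECAY_PER_DAY = 0.05   # Tier 2 confidence lost per day (from the retention column)
PRUNE_BELOW = 0.1      # patterns below this confidence are dropped

def decay_patterns(patterns, days=1):
    """Apply daily confidence decay to Tier 2 patterns and prune the weak ones."""
    survivors = []
    for name, confidence in patterns:
        confidence -= DECAY_PER_DAY * days
        if confidence >= PRUNE_BELOW:
            survivors.append((name, confidence))
    return survivors

patterns = [("Edit -> Bash(test) -> Edit (TDD loop)", 0.9),
            ("one-off concept cluster", 0.12)]
# After one day, the weak cluster (0.12 - 0.05 = 0.07) falls below 0.1 and is pruned;
# the TDD-loop pattern survives at 0.85.
remaining = decay_patterns(patterns, days=1)
```

Decay gives mistaken inferences a natural exit: a pattern that deep dream stops re-confirming loses confidence every day until it is pruned, which is how "confidence decay corrects mistakes over time" can work without manual review.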

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
