Prism v11.0 – $O(1)$ Zero-Search Memory for AI Agents Using HRR and ACT-R
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary

Prism v11.0 mimics how the human brain works to give AI agents cognitive abilities beyond simple retrieval, such as forming principles from experience and recognizing when they lack information. This update improves the entire cognitive pipeline, including ledger compaction and task routing, to run 100% on-device or to draw on secured clinical databases. It is also compatible with a range of MCP clients such as Claude Desktop and Gemini, and strengthens reliability with a local LLM that complies with medical data-protection regulations.
Body
Your AI agent forgets everything between sessions. Prism fixes that — then teaches it to think.

Prism v11.0 is a true Cognitive Architecture inspired by human brain mechanics. Beyond flat vector search, your agent now forms principles from experience, follows causal trains of thought, and possesses the self-awareness to know when it lacks information. Your agents don't just remember; they learn.

With v11.0, the entire cognitive pipeline — including ledger compaction, task routing, semantic search, and the new Deep Research Intelligence — runs 100% on-device or via secure clinical discovery (PubMed/ERIC), backed by `prism-coder:7b`, a HIPAA-hardened local LLM. No API keys for core features. No data leaves your machine.

```
npx -y prism-mcp-server
```

Works with Claude Desktop · Claude Code · Cursor · Windsurf · Cline · Gemini · Antigravity — any MCP client.

Contents:

- Why Prism?
- Quick Start
- The Magic Moment
- Setup Guides
- Universal Import: Bring Your History
- What Makes Prism Different
- Cognitive Architecture (v7.8)
- Data Privacy & Egress
- Use Cases
- What's New
- How Prism Compares
- CLI Reference
- Tool Reference
- Environment Variables
- Architecture
- Scientific Foundation
- Milestones & Roadmap
- Troubleshooting FAQ

Prism v11.0 transforms your AI agent from a "Coder" into a "Clinical Scientist." It features a Tavily-Enhanced Multi-Provider Discovery Pipeline that grounds Gemini 2.5 Flash's thinking in real-world empirical data.
| Feature | Standard AI Memory (Mem0/Zep) | Prism v11.0 (Elite Architecture) |
|---|---|---|
| Search Complexity | | $O(1)$ (Zero-Search) |
| Discovery Logic | General Web Search (Snippets) | Parallel Academic Discovery (PubMed, ERIC, S2) |
| Reasoning Model | Flat List (Simple Similarity) | ACT-R Spreading Activation (Causal Graph) |
| Privacy Mode | Cloud-First (SaaS) | Local-First (HIPAA-Hardened / Air-Gapped) |
| Intelligence Floor | Generic GPT-4 Advice | Data-Driven Clinical Evidence (62% CI Warnings) |

Discovery providers:

- Tavily AI (Elite): Primary discovery engine for AI-native deep crawling and PDF/Abstract extraction.
- PubMed (NCBI) (Clinical): The world's largest biomedical database for clinical citations.
- ERIC (Education Research) (Behavioral): The definitive database for ABA and pediatric interventions.
- Semantic Scholar (Academic): AI-powered research tool providing "TLDR" summaries of 200M+ papers.
- DuckDuckGo Lite (Fallback): Privacy-focused web discovery for general context.

Synalux is a high-compliance, local-first Practice Management System for ABA and Pediatrics. It is the flagship implementation of the Prism v11.0 engine, utilizing Zero-Search Retrieval and Parallel Academic Discovery to provide clinicians with real-time, evidence-based reasoning.

See Live Samples (Simplified Terms)

- Without Deep Research: "I recommend using sensory toys and maintaining a calm environment to help the child focus during tasks."
- With Deep Research (v11.0): "Recent clinical studies indicate that high-frequency sensory input can actually decrease focus in 40% of pediatric cases. I recommend a low-frequency, high-pressure 'weighted' approach which showed a 3.5x improvement in sustained attention during clinical trials."
- Without Deep Research: "Extinction is a common way to stop a behavior. You should also reinforce good behaviors at the same time."
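The "ACT-R Spreading Activation (Causal Graph)" row can be illustrated with a minimal sketch: activation spreads outward from the current problem, decaying per hop, and everything that stays above a threshold comes back as context. This is not Prism's actual implementation; the graph contents, decay factor, and threshold below are hypothetical.

```typescript
// Minimal spreading-activation sketch over a causal graph (hypothetical data).
type Edge = { to: string; weight: number };

function spreadActivation(
  graph: Map<string, Edge[]>,
  seed: string,
  decay = 0.5,      // energy lost per hop
  threshold = 0.1   // nodes below this are not surfaced
): Map<string, number> {
  const activation = new Map<string, number>([[seed, 1.0]]);
  const frontier: [string, number][] = [[seed, 1.0]];
  while (frontier.length > 0) {
    const [node, energy] = frontier.pop()!;
    for (const { to, weight } of graph.get(node) ?? []) {
      const next = energy * decay * weight;
      // Keep only the strongest activation path to each node
      if (next > threshold && next > (activation.get(to) ?? 0)) {
        activation.set(to, next);
        frontier.push([to, next]);
      }
    }
  }
  return activation;
}

// "timeout bug" pulls in its causal neighbours, two hops deep:
const graph = new Map<string, Edge[]>([
  ["timeout bug", [{ to: "retry logic", weight: 1.0 }]],
  ["retry logic", [{ to: "connection pool", weight: 0.8 }]],
  ["connection pool", []],
]);
const result = spreadActivation(graph, "timeout bug");
// "retry logic" (0.5) ranks above "connection pool" (0.2)
```

The point of the row above is the ranking behavior: unlike flat similarity, results are ordered by causal proximity to the seed, not by embedding distance alone.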
- With Deep Research (v11.0): "Research shows that using extinction alone leads to an 'extinction burst' (a temporary spike in the bad behavior) in 62% of cases. However, combining it with an alternative reinforcement strategy (DRA) reduces this risk to under 20%."

Every time you start a new conversation with an AI coding assistant, it starts from scratch. You re-explain your architecture, re-describe your decisions, re-list your TODOs. Hours of context — gone.

Prism gives your agent a brain that persists — and then teaches it to reason. Save what matters at the end of each session. Load it back instantly on the next one. But Prism goes far beyond storage: it consolidates raw experience into lasting principles, traverses causal chains to surface root causes, and knows when to say "I don't know."

📌 Terminology: Throughout this doc, "Prism" refers to the MCP server and cognitive memory engine. "Mind Palace" refers to the visual dashboard UI at localhost:3000 — your window into the agent's brain. They work together; the dashboard is optional.

Prism has three pillars:

- 🧠 Cognitive Memory ($O(1)$ Zero-Search) — Prism uses Holographic Reduced Representations (HRR) to eliminate "searching" entirely. Memories are unbound mathematically from a superposition vector in constant time ($O(1)$), regardless of library size. Re-ranking is powered by the ACT-R model, mimicking biological recency and frequency.
- 🔗 Multi-Hop Causal Reasoning — Prism doesn't just find "similar" things. Spreading activation traverses the causal graph and brings back context connected to your current problem through logical "trains of thought."
- 🏭 Autonomous Execution (Dark Factory) — When you're ready, Prism can run coding tasks end-to-end with a fail-closed pipeline where an adversarial evaluator catches bugs the generator missed — before you ever see the PR. (See Dark Factory.)
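The HRR mechanism behind the first pillar can be sketched in a few lines: key/value vectors are bound by circular convolution, summed into one superposition trace, and retrieved by a single unbinding step (circular correlation) whose cost is independent of how many pairs the trace holds. This is a textbook HRR illustration, not Prism's code, and it uses a naive $O(n^2)$ convolution for clarity (real systems use FFTs).

```typescript
// Bind = circular convolution; unbind = circular correlation
// (convolution with the involution of the cue).
function conv(a: number[], b: number[]): number[] {
  const n = a.length;
  const out = new Array(n).fill(0);
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) out[(i + j) % n] += a[i] * b[j];
  return out;
}
function unbind(trace: number[], cue: number[]): number[] {
  const n = cue.length;
  const inv = cue.map((_, i) => cue[(n - i) % n]); // involution of the cue
  return conv(trace, inv);
}
// Random vectors with i.i.d. N(0, 1/n) entries — the standard HRR setup.
function randVec(n: number): number[] {
  return Array.from({ length: n }, () => {
    const u = 1 - Math.random(), v = Math.random();
    return (Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v)) / Math.sqrt(n);
  });
}
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

const n = 512;
const keyA = randVec(n), memA = randVec(n);
const keyB = randVec(n), memB = randVec(n);
// One superposition trace holding two bound key→memory pairs.
const pairA = conv(keyA, memA);
const pairB = conv(keyB, memB);
const trace = pairA.map((x, i) => x + pairB[i]);
// A single unbinding step yields a noisy copy of memA — no per-item search.
const retrieved = unbind(trace, keyA);
console.log(cosine(retrieved, memA) > cosine(retrieved, memB)); // true
```

The retrieved vector is approximate, which is why the pillar description pairs HRR retrieval with ACT-R re-ranking: unbinding narrows the field in constant time, and recency/frequency scoring cleans up the result.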
Prerequisites:

- Node.js v18+ (v20 LTS recommended; v23.x has a known npx quirk)
- Any MCP-compatible client (Claude Desktop, Cursor, Windsurf, Cline, etc.)
- No API keys required for core features (see Capability Matrix)

Add to your MCP client config (`claude_desktop_config.json`, `.cursor/mcp.json`, etc.):

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"]
    }
  }
}
```

⚠️ Windows / Restricted Shells: If your MCP client complains that `npx` is not found, use the absolute path to your npx binary (e.g. `C:\Program Files\nodejs\npx.cmd`).

That's it. Restart your client. All tools are available. The Mind Palace Dashboard (the visual UI for your agent's brain) starts automatically at http://localhost:3000. You don't need to keep a tab open — the dashboard runs in the background and the MCP tools work with or without it.

🔮 Pro Tip: Once installed, open http://localhost:3000 in your browser to view the Mind Palace Dashboard — a beautiful, real-time UI of your agent's brain. Explore the Knowledge Graph, Intent Health gauges, and Session Ledger.

🔄 Updating Prism: `npx -y` caches the package locally. To force an update to the latest version, restart your MCP client — `npx -y` will fetch the newest release automatically. If you're stuck on a stale version, run `npx clear-npx-cache` (or `npm cache clean --force`) before restarting.

Port 3000 already in use? (Next.js / Vite / etc.) Add `PRISM_DASHBOARD_PORT` to your MCP config `env` block:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"],
      "env": { "PRISM_DASHBOARD_PORT": "3001" }
    }
  }
}
```

Then open http://localhost:3001 instead.
| Feature | Local (Offline) | Cloud (API Key) |
|---|---|---|
| Session memory & handoffs | ✅ | ✅ |
| Keyword search (FTS5) | ✅ | ✅ |
| Time travel & versioning | ✅ | ✅ |
| Mind Palace Dashboard | ✅ | ✅ |
| GDPR export (JSON/Markdown/Vault) | ✅ | ✅ |
| Semantic vector search | ✅ (`embedding_provider=local`) | ✅ (gemini, openai, or voyage) |
| Ledger compaction | ✅ `prism-coder:7b` via Ollama | ✅ Text provider key |
| Task routing (LLM tiebreaker) | ✅ `prism-coder:7b` via Ollama | N/A (heuristic-only) |
| Morning Briefings | ❌ | ✅ Text provider key |
| Web Scholar research | ❌ | ✅ `BRAVE_API_KEY` + `FIRECRAWL_API_KEY` (or `TAVILY_API_KEY`) |
| VLM image captioning | ❌ | ✅ Provider key |
| Autonomous Pipelines (Dark Factory) | ❌ | ✅ Text provider key |

🔑 The core Mind Palace works 100% offline with zero API keys — including semantic vector search with `embedding_provider=local`. Cloud keys unlock text generation features (Briefings, compaction, pipelines). See Environment Variables.

💰 API Cost Note: With `embedding_provider=local`, semantic search is fully free and offline. Cloud providers (`GOOGLE_API_KEY` for Gemini, `VOYAGE_API_KEY`, `OPENAI_API_KEY`) have generous free tiers. `BRAVE_API_KEY` offers 2,000 free searches/month. `FIRECRAWL_API_KEY` has a free plan with 500 credits. For typical solo development, expect $0/month on the free tiers.

Session 1 (Monday evening):

```
You: "Analyze this auth architecture and plan the OAuth migration."
Agent: *deep analysis, decisions, TODO list*
Agent: session_save_ledger → session_save_handoff ✅
```

Session 2 (Tuesday morning — new conversation, new context window):

```
Agent: session_load_context → "Welcome back! Yesterday we decided to use PKCE flow
       with refresh tokens. 3 TODOs remain: migrate the user table, update the
       middleware, and write integration tests."
You: "Pick up where we left off."
```

Your agent remembers everything. No re-uploading files. No re-explaining decisions.
Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"]
    }
  }
}
```

Cursor

Add the same block to `.cursor/mcp.json` (project) or `~/.cursor/mcp.json` (global).

Windsurf

Add the same block to `~/.codeium/windsurf/mcp_config.json`.

VS Code + Continue / Cline

Add to your Continue `config.json` or Cline MCP settings:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"],
      "env": {
        "PRISM_STORAGE": "local",
        "BRAVE_API_KEY": "your-brave-api-key"
      }
    }
  }
}
```

Claude Code — Lifecycle Autoload (.clauderules)

Claude Code picks up MCP tools naturally when you add them to your workspace `.clauderules`. Simply add:

```
Always start the conversation by calling `mcp__prism-mcp__session_load_context(project='my-project', level='deep')`.
When wrapping up, always call `mcp__prism-mcp__session_save_ledger` and `mcp__prism-mcp__session_save_handoff`.
```

Format Note: Claude automatically wraps MCP tools with double underscores (`mcp__prism-mcp__...`), while most other clients use single underscores (`mcp_prism-mcp_...`). Prism's backend handles both formats natively.

CLI Alternative: If MCP tools aren't available or you're scripting around Claude Code:

```
# Load context before a session
prism load my-project --level deep

# Machine-readable JSON for parsing in scripts
prism load my-project --level deep --json
```

Gemini / Antigravity — Prompt Auto-Load

See the Gemini Setup Guide for the proven three-layer prompt architecture to ensure reliable session auto-loading. Antigravity doesn't expose MCP tools to the model.
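The double- vs. single-underscore handling described in the Format Note amounts to a small normalization step. A minimal sketch of such a step might look like the following; `normalizeToolName` is a hypothetical helper, not Prism's actual API:

```typescript
// Hypothetical sketch: strip either MCP prefix style down to the bare tool name.
// Accepts mcp__prism-mcp__tool (Claude) and mcp_prism-mcp_tool (most clients).
function normalizeToolName(raw: string): string {
  const m = raw.match(/^mcp_{1,2}prism-mcp_{1,2}(.+)$/);
  return m ? m[1] : raw; // unknown names pass through unchanged
}

console.log(normalizeToolName("mcp__prism-mcp__session_load_context"));
// → "session_load_context"
console.log(normalizeToolName("mcp_prism-mcp_session_save_ledger"));
// → "session_save_ledger"
```

Whatever the real implementation looks like, the practical takeaway stands: you can write `.clauderules` with double underscores and other client prompts with single underscores, and both resolve to the same tool.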
Use the `prism load` CLI as a fallback:

```
# From a shell or run_command tool
prism load my-project --level standard --json

# Or via the wrapper script
bash ~/.gemini/antigravity/scratch/prism_session_loader.sh my-project
```

The CLI uses the same storage layer as the MCP tool (SQLite or Supabase).

⚠️ CRITICAL (v9.2.2): Split-Brain Prevention — If your MCP server is configured with `PRISM_STORAGE=local` but Supabase credentials are also set, the CLI may read from the wrong backend (Supabase) while the server writes to SQLite. This causes stale TODOs and divergent state. Always pass `--storage local` explicitly when using the CLI in a local-mode environment:

```
prism load my-project --storage local --json
```

The `prism_session_loader.sh` wrapper handles this automatically since v9.2.2.

Bash / CI/CD / Scripts

Use the `prism load` CLI to access session context from any shell environment:

```
# Quick check — human-readable
prism load my-project

# Parse JSON in scripts
CONTEXT=$(prism load my-project --level quick --json)
SUMMARY=$(echo "$CONTEXT" | jq -r '.handoff[0].last_summary')
VERSION=$(echo "$CONTEXT" | jq -r '.handoff[0].version')
echo "Project at v$VERSION: $SUMMARY"

# Explicit storage backend (v9.2.2 — prevents split-brain)
prism load my-project --storage local --json
prism load my-project --storage supabase --json

# Role-scoped loading
prism load my-project --role qa --json

# Use in CI/CD to verify context exists before deploying
if ! prism load my-project --level quick --json | jq -e '.handoff[0].version' > /dev/null 2>&1; then
  echo "No Prism context found — skipping context-aware deploy"
fi
```

📦 Install: `npm install -g prism-mcp-server` makes the `prism` CLI available globally. For local builds: `node /path/to/prism/dist/cli.js load`.
Supabase Cloud Sync

To sync memory across machines or teams:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"],
      "env": {
        "PRISM_STORAGE": "supabase",
        "SUPABASE_URL": "https://your-project.supabase.co",
        "SUPABASE_KEY": "your-supabase-anon-or-service-key"
      }
    }
  }
}
```

Prism auto-applies its schema on first connect — no manual step required. If you need to apply or re-apply migrations manually (e.g. for a fresh project or after a version bump), run the SQL files in `supabase/migrations/` in numbered order via the Supabase SQL Editor or the CLI:

```
# Via CLI (requires supabase CLI + project linked)
supabase db push

# Or apply a single migration via the Supabase dashboard SQL Editor
# Paste the contents of supabase/migrations/0NN_*.sql and click Run
```

Key migrations:

- `020_*` — Core schema (ledger, handoff, FTS, TTL, CRDT)
- `033_memory_links.sql` — Associative Memory Graph (MemoryLinks) — required for `session_backfill_links`

Anon key vs. service role key: The anon key works for personal use (Supabase RLS policies apply). Use the service role key for team deployments where multiple users share the same Supabase project — it bypasses RLS and allows Prism to manage all rows regardless of auth context. Never expose the service role key client-side.

Clone & Build (Full Control)

```
git clone https://github.com/dcostenco/prism-mcp.git
cd prism-mcp && npm install && npm run build
```

Then add to your MCP config:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "node",
      "args": ["/path/to/prism-mcp/dist/server.js"],
      "env": { "BRAVE_API_KEY": "your-key" }
    }
  }
}
```

Cloud Deployment (Render)

Prism can be deployed natively to cloud platforms like Render so your agent's memory is always online and accessible across different machines or teams.

- Fork this repository.
- In the Render Dashboard, create a new Web Service pointing to your repository.
- In the setup wizard, select Docker as the Runtime.
- Set the Dockerfile path to `Dockerfile.smithery`.
- Connect your local MCP client to your new cloud endpoint using the `sse` transport:

```json
{
  "mcpServers": {
    "prism-mcp-cloud": {
      "command": "npx",
      "args": ["-y", "supergateway", "--url", "https://your-prism-app.onrender.com/sse"]
    }
  }
}
```

Note: The `Dockerfile.smithery` uses an optimized multi-stage build that compiles TypeScript safely in a development environment before booting the server in a stripped-down production image. No NPM publishing required!

❌ Don't use `npm install -g`: Hardcoding the binary path (e.g. `/opt/homebrew/Cellar/node/23.x/bin/prism-mcp-server`) ties you to a specific Node.js version — when Node updates, the path silently breaks.

✅ Always use `npx` instead:

```json
{
  "mcpServers": {
    "prism-mcp": {
      "command": "npx",
      "args": ["-y", "prism-mcp-server"]
    }
  }
}
```

`npx` resolves the correct binary automatically and always fetches the latest release.
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.