Spacebot: an agentic AI system with a dedicated role for each LLM process, an OpenClaw alternative
hackernews
🔬 Research
#agentic ai
#ai systems
#llm
#openclaw
#review
#team collaboration
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Spacebot is an agentic computing architecture built for large teams and communities that assigns a dedicated role to each LLM process: channels, branches, and workers. A memory graph links eight memory types, and the Cortex, equipped with hybrid recall, generates a summary briefing every 60 minutes so conversations keep their context. It supports multiple platforms including Discord, Slack, and Telegram, and offers natural-language scheduling plus smart model routing based on task complexity. Simply drop in text files and an LLM classifies the knowledge and organizes it into the graph automatically; migration from OpenClaw is also straightforward.
Body
Spacebot thinks, executes, and responds concurrently, not sequentially. Built for large teams and communities. An opinionated architecture for agentic computing.

Example channels: dev-general, design-review, support-tickets, infrastructure, product-ideas, daily-standup

A sample conversation from the landing page:

User: can you research what changed in the Stripe API and update our webhook handler?
Spacebot: On it, let me pull context and get a worker on this.
User: also, are we still on for the deploy at 3?
Spacebot: Yes, staging is green. I'll run the final checks before 3.
User: cool. make sure we handle the new payment_intent.requires_action event
Spacebot: Already on it, the worker is scraping the latest changelog now.

Recalled memories: Oscar prefers Stripe v2 webhook format. Last integration used checkout sessions 3 weeks ago. Team policy requires signature verification on all endpoints.

User: did we ever set up the retry logic for failed webhooks?
Spacebot: Not yet, I'll do that now.

Worker report: Stripe API v2024-12 changelog scraped. 3 new event types identified: payment_intent.requires_action, invoice.overdue, charge.dispute.funds_withdrawn. Webhook handler updated. Added signature verification, exponential backoff retry logic, and handlers for all 3 new event types.

Spacebot: Done. Scraped the Stripe API v2024-12 changelog and found 3 new event types (payment_intent.requires_action, invoice.overdue, charge.dispute.funds_withdrawn). Webhook handler updated with signature verification, exponential backoff retry logic, and handlers for all 3 events.

Workers work. Branches think. Nothing blocks.

Channel: The user-facing ambassador. One per conversation. Has soul, identity, personality. Talks to the user. Delegates everything else.
Branch: A fork of the channel's context that goes off to think. Has the channel's full history. Returns only the conclusion.
Worker: Does real work. Gets a task and the right tools. No personality, no conversation context; just focused execution.

The perfect assistant: out of the box, with everything you need to create a fleet of capable AI employees.
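The channel/worker split described above ("thinks, executes, and responds concurrently, not sequentially") can be sketched with asyncio. This is a minimal illustration under stated assumptions, not Spacebot's real API: the `Channel`, `Worker`, and `handle`/`delegate` names are invented for the example.

```python
import asyncio

class Worker:
    """Focused execution: gets a task, no conversation context or personality."""
    async def run(self, task: str) -> str:
        await asyncio.sleep(0.01)  # stand-in for real work (shell, browser, scraping, ...)
        return f"done: {task}"

class Channel:
    """User-facing process: replies immediately, delegates real work to workers."""
    def __init__(self) -> None:
        self.background: list[asyncio.Task] = []

    def delegate(self, task: str) -> None:
        # Fire-and-forget: the conversation is never blocked on the worker.
        self.background.append(asyncio.create_task(Worker().run(task)))

    async def handle(self, message: str) -> str:
        self.delegate(message)
        return "On it, a worker is picking this up."

async def main() -> None:
    ch = Channel()
    # The channel answers instantly while the worker runs in the background.
    print(await ch.handle("update the Stripe webhook handler"))
    print(await asyncio.gather(*ch.background))

asyncio.run(main())
```

The point of the design is that the conversational process never awaits the worker inline; it only collects results once they arrive, so chat stays responsive.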
Memory Graph: Eight memory types (Fact, Preference, Decision, Identity, Event, Observation, Goal, Todo) with graph edges connecting them. Hybrid recall via vector + full-text search. The cortex generates a periodic briefing instead of dumping raw results into context.

Multi-Platform: Native adapters for Discord, Slack, and Telegram. Message coalescing batches rapid-fire bursts. Threading, reactions, file attachments, typing indicators, and per-channel permissions.

Task Execution: Shell, file, exec, browser, and web search tools. Workers are pluggable: built-in workers handle most tasks, or spawn OpenCode for deep coding sessions with LSP awareness. Both support interactive follow-ups.

Smart Model Routing: Process-type defaults (channels get the best conversational model, workers get cheap and fast). Task-type overrides. Prompt complexity scoring routes simple requests to cheaper models automatically. Fallback chains handle rate limits.

Scheduling: Cron jobs with natural-language scheduling. "Check my inbox every 30 minutes" becomes a job with a delivery target. Active-hours support with midnight wrapping. A circuit breaker auto-disables a job after 3 consecutive failures.

Multi-Agent: Run multiple agents on one instance, each with its own workspace, databases, identity, and cortex. A friendly community bot on Discord, a no-nonsense dev assistant on Slack, a research agent for background tasks. One binary, one deploy.

It already knows. The Cortex sees across every conversation, every memory, every running process. It synthesizes what the agent knows into a pre-computed briefing that every conversation inherits, so nothing starts cold. (Example briefing line: "James is the primary user. Prefers concise communication, dislikes over-engineering.")

Memory Bulletin: Every 60 minutes, the Cortex queries the memory graph across 8 dimensions and synthesizes a concise briefing. Every conversation reads it on every turn, lock-free and zero-copy.
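The complexity-scored routing with fallback chains described above could look roughly like the following. Everything here is an assumption for illustration: the model names, the scoring heuristic, and the thresholds are invented, and Spacebot's actual scorer is not documented in this post.

```python
# Hypothetical model tiers, cheapest to strongest.
CHEAP, MID, BEST = "small-fast", "mid-tier", "frontier"

def complexity_score(prompt: str) -> float:
    """Crude proxy: longer prompts and code-ish markers score higher (0..1)."""
    score = min(len(prompt) / 500, 1.0)
    for marker in ("```", "refactor", "architecture", "prove", "debug"):
        if marker in prompt.lower():
            score += 0.3
    return min(score, 1.0)

def route(prompt: str, process_type: str = "worker") -> list[str]:
    """Return a fallback chain: preferred model first, backups after.

    Process-type default: channels always get the best conversational model;
    workers are routed by prompt complexity.
    """
    if process_type == "channel":
        return [BEST, MID]
    s = complexity_score(prompt)
    if s < 0.3:
        return [CHEAP, MID]      # simple request -> cheap model, mid as fallback
    if s < 0.6:
        return [MID, CHEAP]
    return [BEST, MID, CHEAP]    # complex request -> frontier, with fallbacks

print(route("what time is it?"))                # cheap model first
print(route("refactor this module ```...```"))  # frontier model first
```

The fallback chain doubles as rate-limit handling: if the first model is throttled, the caller just tries the next entry in the list.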
Association Loop: Continuously scans memories for embedding similarity and builds graph edges between related knowledge. Facts link to decisions. Events link to goals. The graph grows smarter on its own.

Cortex Chat: A persistent admin line directly to the Cortex. Full tool access: memory, shell, browser, web search, workers. One conversation per agent, accessible from anywhere.

Drop files. Get memories. Dump text files into the ingest folder: notes, docs, logs, markdown, whatever. Spacebot chunks them, runs each chunk through an LLM with memory tools, and produces typed, graph-connected memories automatically. No manual tagging. No reformatting. The LLM reads each chunk, classifies the content, recalls related memories to avoid duplicates, and saves distilled knowledge with importance scores and graph associations.

Migrating from OpenClaw? Drop your MEMORY.md and daily logs into the ingest folder; Spacebot extracts structured memories and wires them into the graph. Skills go in the skills folder and are compatible out of the box.
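One pass of the association loop described above can be sketched as a pairwise similarity scan over memory embeddings. This is a toy illustration under assumptions: the data model (memory id mapped to embedding vector) and the 0.85 threshold are invented, and a real system would use an index rather than an O(n²) scan.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def association_pass(memories: dict[str, list[float]], threshold: float = 0.85) -> set[tuple[str, str]]:
    """One sweep: add a graph edge between every pair of similar memories."""
    edges = set()
    items = list(memories.items())
    for i, (id_a, emb_a) in enumerate(items):
        for id_b, emb_b in items[i + 1:]:
            if cosine(emb_a, emb_b) >= threshold:
                edges.add((id_a, id_b))
    return edges

# Tiny fabricated embeddings: the fact and the decision are near-parallel,
# so they get linked; the goal points elsewhere and stays unlinked.
memories = {
    "fact:stripe-v2": [0.9, 0.1, 0.0],
    "decision:verify-signatures": [0.88, 0.15, 0.05],
    "goal:ship-friday": [0.0, 0.2, 0.95],
}
print(association_pass(memories))
```

Run repeatedly in the background, passes like this are how "facts link to decisions" without any manual tagging: the edges emerge from embedding proximity alone.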
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.