Agentis – A multi-agent AI platform across 12 LLM providers, watch it in 3D

hackernews · 📦 Open source
#3d ai #agentis #ai deals #ai platform #anthropic #claude #command r #gemini #gpt-4 #llama #llm #mistral #openai #multi-agent
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

Agentis is an open-source multi-agent platform that, instead of relying on a single model, draws on 12 major LLM providers simultaneously to assemble teams of specialized AI agents and visualizes their work on a real-time 3D canvas. It runs directly in the browser with no Docker or separate infrastructure; agents such as researchers, coders, and analysts each perform a distinct role, and their results are synthesized into a final answer. It also supports automatic failover between providers, smart per-task model assignment, and live token cost tracking to maximize efficiency on large tasks.

Full text

Most AI tools give you one model and one answer. Agentis gives you a team. Deploy fleets of specialized agents — researchers, coders, analysts, writers, and more — across 12 LLM providers simultaneously. Each agent works its angle, shares findings, and hands off to the next. Watch it unfold live with hexagonal agent nodes, curved edges, real-time thought bubbles, and per-agent token tracking. Open source, provider-agnostic, and built for tasks that are too big for a single prompt.

Agentis is a browser-native multi-agent AI platform. You describe a task — Agentis spawns a coordinated team of specialized AI agents across multiple LLM providers, visualizes their live thinking on an animated canvas, and synthesizes everything into one clean answer. No backend. No Docker. No infra. Clone, `npm install`, go.

```
You → "Research the competitive landscape for AI coding tools"

Agentis orchestrator plans:
├── Researcher (claude-sonnet-4-6 · Anthropic) → market sizing, key players
├── Analyst    (gemini-2.5-flash · Google)     → feature comparison matrix
├── Coder      (gpt-4.1-mini · OpenAI)         → API/SDK landscape
└── Reviewer   (llama-3.3-70b · Groq)          → fact-check & critique
        ↓ ~45 seconds
One comprehensive, synthesized report.
```
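The orchestrator assigns a different model to each role in the plan above. As a rough illustration of how per-task model tiering plus provider failover could work, here is a minimal TypeScript sketch; all names (`pickModel`, `tierFor`, the tier table) are hypothetical and are not taken from the Agentis codebase:

```typescript
// Hypothetical sketch of tier selection with provider failover.
// Not Agentis source code; names and heuristics are illustrative only.

type Tier = "fast" | "frontier";

interface ModelChoice {
  provider: string;
  model: string;
}

// Candidate models per tier, in priority order. Skipping providers that
// are currently down yields failover behaviour as a side effect.
const TIERS: Record<Tier, ModelChoice[]> = {
  fast: [
    { provider: "google", model: "gemini-2.5-flash" },
    { provider: "groq", model: "llama-3.3-70b" },
  ],
  frontier: [
    { provider: "anthropic", model: "claude-sonnet-4-6" },
    { provider: "openai", model: "gpt-4.1" },
  ],
};

// Pick the first model in the tier whose provider is reachable.
function pickModel(tier: Tier, isUp: (provider: string) => boolean): ModelChoice {
  const choice = TIERS[tier].find((c) => isUp(c.provider));
  if (!choice) throw new Error(`no available provider for tier "${tier}"`);
  return choice;
}

// Toy heuristic: routine extraction goes to fast models,
// open-ended work goes to frontier models.
function tierFor(subtask: string): Tier {
  return /summarize|lookup|extract/i.test(subtask) ? "fast" : "frontier";
}
```

For example, if Google were unreachable mid-run, `pickModel("fast", ...)` would fall through to Groq's `llama-3.3-70b` on the next call, which is the shape of the zero-data-loss failover the project describes.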
- Live animated canvas — agents spawn as orbiting nodes, spin while working, send visible messages to each other
- 12 LLM providers simultaneously — mix Anthropic, OpenAI, Google, Groq, Mistral, DeepSeek, OpenRouter, Cohere, xAI, Together, Ollama, LM Studio in one run
- Exact model names on every node — `claude-sonnet-4-6`, `gpt-4.1-mini`, `gemini-2.5-flash`
- Smart tier selection — orchestrator assigns simple → fast model, complex → frontier model per task
- Auto-failover — when a provider goes down mid-task, agents switch to the next available one automatically, zero data loss
- Persistent universe — follow-up questions recall relevant old agents and add new ones; knowledge compounds across turns
- Hexagonal agent nodes — distinct visual identity per agent with role-based color coding
- Curved bezier edges — animated particle flow along curved connections between agents and tools
- Live thought bubbles — active agents display their last output snippet in a glassmorphism overlay in real time
- Token progress bars — visual indicator under each agent node showing output token usage
- Tool call diamonds — web search, LLM calls, and browser actions rendered as animated diamond nodes
- Horizontal timeline — shows every agent's start/end as a color-coded bar
- Tool call markers — overlaid on each agent's track showing exactly when web searches, LLM calls, and browser actions fired
- Live duration counter — active agents show elapsed time in seconds
- Per-agent token counts — real input/output token data captured from Anthropic's SSE stream
- Cost estimation — per-agent and session-total cost calculated from live token counts and model pricing
- Header metrics bar — total tokens and estimated cost (~$X.XXX) displayed live in the canvas header
- Orchestrator plans agent topology, delegates subtasks, then merges all outputs
- Final answer is a clean direct response — no meta-commentary about which agent said what
- Export as Markdown, plain text, or save key insights to persistent memory
- Providers — configure all 12 providers with live connection testing + model recommendations per complexity tier
- Models — browse all available models with pricing, context window, and availability status
- Memory — IndexedDB-backed persistent memory with importance scoring, decay, export/import
- Migration — one-click OpenClaw → OpenFang migration (auto-detect, YAML→TOML conversion, tool remapping)

| Provider | Best Models | Strength |
|---|---|---|
| Anthropic | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | Reasoning, writing, long context |
| OpenAI | GPT-4.1, GPT-4.1 Mini, o4-mini | Code, structured output, tools |
| Google | Gemini 2.5 Pro, 2.5 Flash, 2.0 Flash | 1M context, multimodal |
| Groq | Llama 3.3 70B, 3.1 8B, Mixtral | Fastest inference on the planet |
| Mistral | Large 2, Small 3.1, Codestral | European data, code generation |
| DeepSeek | V3, R1 Reasoner | Math, logic, best cost/quality ratio |
| OpenRouter | 200+ models | Single API for everything |
| Cohere | Command R+, Command R | Enterprise RAG, retrieval |
| xAI | Grok 3, Grok 3 Mini | Real-time web knowledge |
| Together AI | Llama 405B, Qwen 2.5 72B | Best open-source models |
| Ollama | Any model you pull | Local, private, free |
| LM Studio | Any GGUF model | Local GUI + OpenAI-compatible API |

Requirements:

- Node.js 18+
- At least one LLM provider API key (or Ollama running locally — free)

```
git clone https://github.com/Dhwanil25/Agentis.git
cd Agentis
npm install
npm run dev
```

Open http://localhost:5173, paste any API key, and launch your first agent team.

```
# .env.local
VITE_ANTHROPIC_API_KEY=sk-ant-...
```

| Feature | What's needed |
|---|---|
| Web search | Free Tavily API key |
| Browser agent | `npm install -g pinchtab && pinchtab server` |
| Local models | Ollama or LM Studio running locally |
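The feature list mentions per-agent and session-total cost calculated from live token counts and model pricing. A minimal sketch of that arithmetic in TypeScript; the function names and the pricing figures in the test are placeholders, not real provider prices or Agentis internals:

```typescript
// Illustrative cost-estimation sketch, assuming per-million-token pricing.
// Not Agentis source code; names and structure are hypothetical.

interface Usage {
  inputTokens: number;
  outputTokens: number;
}

interface Pricing {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

// Cost for one agent: tokens scaled to millions, times the model's rates.
function agentCost(usage: Usage, price: Pricing): number {
  return (
    (usage.inputTokens / 1_000_000) * price.inputPerMTok +
    (usage.outputTokens / 1_000_000) * price.outputPerMTok
  );
}

// Session total shown in the canvas header: sum over all agents.
function sessionCost(agents: Array<{ usage: Usage; price: Pricing }>): number {
  return agents.reduce((sum, a) => sum + agentCost(a.usage, a.price), 0);
}
```

Driving this from live counters as an SSE stream reports usage would produce the running `~$X.XXX` figure the header metrics bar describes.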

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
