An open-source long-horizon SuperAgent harness that researches, codes, and creates
hackernews
📦 Open Source
#deerflow
#github
#gpt-4
#superagent
#opensource
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
DeerFlow 2.0, an open-source super agent harness, has been released and claimed the #1 spot on GitHub Trending. This version is a complete rewrite that orchestrates sub-agents, memory, and sandboxes to carry out complex tasks spanning research, coding, and creative work. Users can configure an LLM provider and execution environment in about two minutes through an interactive wizard, and the system can be deployed with Docker or in a local environment.
Full Text
English | 中文 | 日本語 | Français | Русский

On February 28th, 2026, DeerFlow claimed the 🏆 #1 spot on GitHub Trending following the launch of version 2. Thanks a million to our incredible community — you made this happen! 💪🔥

DeerFlow (Deep Exploration and Efficient Research Flow) is an open-source super agent harness that orchestrates sub-agents, memory, and sandboxes to do almost anything — powered by extensible skills.

[Demo video: deer-flow-720p.mp4]

> **Note**
> DeerFlow 2.0 is a ground-up rewrite. It shares no code with v1. If you're looking for the original Deep Research framework, it's maintained on the 1.x branch — contributions there are still welcome. Active development has moved to 2.0.

Learn more and see real demos on our official website.

- We strongly recommend using Doubao-Seed-2.0-Code, DeepSeek v3.2, and Kimi 2.5 to run DeerFlow — learn more.
- Developers in mainland China: click here.

DeerFlow now integrates InfoQuest, the intelligent search and crawling toolset independently developed by BytePlus (a free online trial is available).

## 🦌 DeerFlow 2.0

If you use Claude Code, Codex, Cursor, Windsurf, or another coding agent, you can hand it the setup instructions in one sentence:

> Help me clone DeerFlow if needed, then bootstrap it for local development by following https://raw.githubusercontent.com/bytedance/deer-flow/main/Install.md

That prompt is intended for coding agents. It tells the agent to clone the repo if needed, choose Docker when available, and stop with the exact next command plus any missing config the user still needs to provide.

- Clone the DeerFlow repository:

  ```bash
  git clone https://github.com/bytedance/deer-flow.git
  cd deer-flow
  ```

- Run the setup wizard. From the project root directory (`deer-flow/`), run:

  ```bash
  make setup
  ```

  This launches an interactive wizard that guides you through choosing an LLM provider, optional web search, and execution/safety preferences such as sandbox mode, bash access, and file-write tools. It generates a minimal `config.yaml` and writes your keys to `.env`. Takes about 2 minutes. The wizard also lets you configure an optional web search provider, or skip it for now. Run `make doctor` at any time to verify your setup and get actionable fix hints.

Advanced / manual configuration: if you prefer to edit `config.yaml` directly, run `make config` instead to copy the full template. See `config.example.yaml` for the complete reference, including CLI-backed providers (Codex CLI, Claude Code OAuth), OpenRouter, Responses API, and more.

Manual model configuration examples:

```yaml
models:
  - name: gpt-4o
    display_name: GPT-4o
    use: langchain_openai:ChatOpenAI
    model: gpt-4o
    api_key: $OPENAI_API_KEY

  - name: openrouter-gemini-2.5-flash
    display_name: Gemini 2.5 Flash (OpenRouter)
    use: langchain_openai:ChatOpenAI
    model: google/gemini-2.5-flash-preview
    api_key: $OPENROUTER_API_KEY
    base_url: https://openrouter.ai/api/v1

  - name: gpt-5-responses
    display_name: GPT-5 (Responses API)
    use: langchain_openai:ChatOpenAI
    model: gpt-5
    api_key: $OPENAI_API_KEY
    use_responses_api: true
    output_version: responses/v1

  - name: qwen3-32b-vllm
    display_name: Qwen3 32B (vLLM)
    use: deerflow.models.vllm_provider:VllmChatModel
    model: Qwen/Qwen3-32B
    api_key: $VLLM_API_KEY
    base_url: http://localhost:8000/v1
    supports_thinking: true
    when_thinking_enabled:
      extra_body:
        chat_template_kwargs:
          enable_thinking: true
```

OpenRouter and similar OpenAI-compatible gateways should be configured with `langchain_openai:ChatOpenAI` plus `base_url`.
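The same pattern extends to any local server that exposes an OpenAI-compatible endpoint. As a minimal sketch (not from the DeerFlow docs), assuming an Ollama instance serving its OpenAI-compatible API at `http://localhost:11434/v1`, an entry might look like:

```yaml
models:
  - name: llama3-ollama            # hypothetical entry, not in config.example.yaml
    display_name: Llama 3 (Ollama)
    use: langchain_openai:ChatOpenAI
    model: llama3                  # whatever model tag your Ollama instance serves
    api_key: $OLLAMA_API_KEY       # Ollama ignores the key; any non-empty value works
    base_url: http://localhost:11434/v1
```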
If you prefer a provider-specific environment variable name, point `api_key` at that variable explicitly (for example `api_key: $OPENROUTER_API_KEY`). To route OpenAI models through `/v1/responses`, keep using `langchain_openai:ChatOpenAI` and set `use_responses_api: true` with `output_version: responses/v1`.

For vLLM 0.19.0, use `deerflow.models.vllm_provider:VllmChatModel`. For Qwen-style reasoning models, DeerFlow toggles reasoning with `extra_body.chat_template_kwargs.enable_thinking` and preserves vLLM's non-standard `reasoning` field across multi-turn tool-call conversations. Legacy `thinking` configs are normalized automatically for backward compatibility. Reasoning models may also require the server to be started with `--reasoning-parser ...`. If your local vLLM deployment accepts any non-empty API key, you can still set `VLLM_API_KEY` to a placeholder value.

CLI-backed provider examples:

```yaml
models:
  - name: gpt-5.4
    display_name: GPT-5.4 (Codex CLI)
    use: deerflow.models.openai_codex_provider:CodexChatModel
    model: gpt-5.4
    supports_thinking: true
    supports_reasoning_effort: true

  - name: claude-sonnet-4.6
    display_name: Claude Sonnet 4.6 (Claude Code OAuth)
    use: deerflow.models.claude_provider:ClaudeChatModel
    model: claude-sonnet-4-6
    max_tokens: 4096
    supports_thinking: true
```

- Codex CLI reads `~/.codex/auth.json`.
- Claude Code accepts `CLAUDE_CODE_OAUTH_TOKEN`, `ANTHROPIC_AUTH_TOKEN`, `CLAUDE_CODE_CREDENTIALS_PATH`, or `~/.claude/.credentials.json`.
- ACP agent entries are separate from model providers — if you configure `acp_agents.codex`, point it at a Codex ACP adapter such as `npx -y @zed-industries/codex-acp`.
- On macOS, export Claude Code auth explicitly if needed: `eval "$(python3 scripts/export_claude_code_oauth.py --print-export)"`

API keys can also be set manually in `.env` (recommended) or exported in your shell:

```bash
OPENAI_API_KEY=your-openai-api-key
TAVILY_API_KEY=your-tavily-api-key
```

Use the table below as a practical starting point when choosing how to run DeerFlow:

| Deployment target | Starting point | Recommended | Notes |
|---|---|---|---|
| Local evaluation / `make dev` | 4 vCPU, 8 GB RAM, 20 GB free SSD | 8 vCPU, 16 GB RAM | Good for one developer or one light session with hosted model APIs. 2 vCPU / 4 GB is usually not enough. |
| Docker development / `make docker-start` | 4 vCPU, 8 GB RAM, 25 GB free SSD | 8 vCPU, 16 GB RAM | Image builds, bind mounts, and sandbox containers need more headroom than pure local dev. |
| Long-running server / `make up` | 8 vCPU, 16 GB RAM, 40 GB free SSD | 16 vCPU, 32 GB RAM | Preferred for shared use, multi-agent runs, report generation, or heavier sandbox workloads. |

- These numbers cover DeerFlow itself. If you also host a local LLM, size that service separately.
- Linux plus Docker is the recommended deployment target for a persistent server. macOS and Windows are best treated as development or evaluation environments.
- If CPU or memory usage stays pinned, reduce concurrent runs first, then move to the next sizing tier.

Development (hot-reload, source mounts):

```bash
make docker-init   # Pull sandbox image (only once or when image updates)
make docker-start  # Start services (auto-detects sandbox mode from config.yaml)
```

`make docker-start` starts the provisioner only when `config.yaml` uses provisioner mode (`sandbox.use: deerflow.community.aio_sandbox:AioSandboxProvider` with `provisioner_url`).
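For orientation only, a `config.yaml` fragment that would trigger provisioner mode might look like the sketch below. The key names come from the note above; the endpoint value and exact nesting are assumptions — see the Sandbox Configuration Guide for the authoritative schema.

```yaml
# Sketch only: a sandbox section that makes `make docker-start` also launch
# the provisioner. Key names are from the note above; the URL is a
# placeholder for your own provisioner service.
sandbox:
  use: deerflow.community.aio_sandbox:AioSandboxProvider
  provisioner_url: http://localhost:8080
```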
Docker builds use the upstream uv registry by default. If you need faster mirrors in restricted networks, export `UV_INDEX_URL=https://pypi.tuna.tsinghua.edu.cn/simple` and `NPM_REGISTRY=https://registry.npmmirror.com` before running `make docker-init` or `make docker-start`.

Backend processes automatically pick up `config.yaml` changes on the next config access, so model metadata updates do not require a manual restart during development.

> **Tip**
> On Linux, if Docker-based commands fail with "permission denied" while trying to connect to the Docker daemon socket at `unix:///var/run/docker.sock`, add your user to the `docker` group and re-login before retrying. See CONTRIBUTING.md for the full fix.

Production (builds images locally, mounts runtime config and data):

```bash
make up    # Build images and start all production services
make down  # Stop and remove containers
```

> **Note**
> The LangGraph agent server currently runs via `langgraph dev` (the open-source CLI server).

Access: http://localhost:2026

See CONTRIBUTING.md for the detailed Docker development guide.

If you prefer running services locally:

Prerequisite: complete the "Configuration" steps above first (`make setup`). `make dev` requires a valid `config.yaml` in the project root (can be overridden via `DEER_FLOW_CONFIG_PATH`). Run `make doctor` to verify your setup before starting.

On Windows, run the local development flow from Git Bash. Native cmd.exe and PowerShell shells are not supported for the bash-based service scripts, and WSL is not guaranteed because some scripts rely on Git for Windows utilities such as `cygpath`.

- Check prerequisites:

  ```bash
  make check  # Verifies Node.js 22+, pnpm, uv, nginx
  ```

- Install dependencies:

  ```bash
  make install  # Install backend + frontend dependencies
  ```

- (Optional) Pre-pull sandbox image:

  ```bash
  # Recommended if using a Docker/Container-based sandbox
  make setup-sandbox
  ```

- (Optional) Load sample memory data for local review:

  ```bash
  python scripts/load_memory_sample.py
  ```

  This copies the sample fixture into the default local runtime memory file so reviewers can immediately test Settings > Memory. See `backend/docs/MEMORY_SETTINGS_REVIEW.md` for the shortest review flow.

- Start services:

  ```bash
  make dev
  ```

- Access: http://localhost:2026
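Put together, a first local run usually boils down to the following sequence. Every command comes from the steps above; the two optional steps can be skipped:

```bash
make check                            # verify Node.js 22+, pnpm, uv, nginx
make install                          # backend + frontend dependencies
make setup-sandbox                    # optional: pre-pull the sandbox image
python scripts/load_memory_sample.py  # optional: load the sample memory fixture
make dev                              # start services, then open http://localhost:2026
```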
DeerFlow supports multiple startup modes across two dimensions:

- Dev / Prod — dev enables hot-reload; prod uses the pre-built frontend
- Standard / Gateway — standard uses a separate LangGraph server (4 processes); Gateway mode (experimental) embeds the agent runtime in the Gateway API (3 processes)

| | Local Foreground | Local Daemon | Docker Dev | Docker Prod |
|---|---|---|---|---|
| Dev | `./scripts/serve.sh --dev` / `make dev` | `./scripts/serve.sh --dev --daemon` / `make dev-daemon` | `./scripts/docker.sh start` / `make docker-start` | — |
| Dev + Gateway | `./scripts/serve.sh --dev --gateway` / `make dev-pro` | `./scripts/serve.sh --dev --gateway --daemon` / `make dev-daemon-pro` | `./scripts/docker.sh start --gateway` / `make docker-start-pro` | — |
| Prod | `./scripts/serve.sh --prod` / `make start` | `./scripts/serve.sh --prod --daemon` / `make start-daemon` | — | `./scripts/deploy.sh` / `make up` |
| Prod + Gateway | `./scripts/serve.sh --prod --gateway` / `make start-pro` | `./scripts/serve.sh --prod --gateway --daemon` / `make start-daemon-pro` | — | `./scripts/deploy.sh --gateway` / `make up-pro` |

| Action | Local | Docker Dev | Docker Prod |
|---|---|---|---|
| Stop | `./scripts/serve.sh --stop` / `make stop` | `./scripts/docker.sh stop` / `make docker-stop` | `./scripts/deploy.sh down` / `make down` |
| Restart | `./scripts/serve.sh --restart [flags]` | `./scripts/docker.sh restart` | — |

Gateway mode eliminates the LangGraph server process — the Gateway API handles agent execution directly via async tasks, managing its own concurrency. In standard mode, DeerFlow runs a dedicated LangGraph Platform server alongside the Gateway API. This architecture works well but has trade-offs:

| | Standard Mode | Gateway Mode |
|---|---|---|
| Architecture | Gateway (REST API) + LangGraph (agent runtime) | Gateway embeds agent runtime |
| Concurrency | `--n-jobs-per-worker` per worker (requires license) | `--workers` × async tasks (no per-worker cap) |
| Containers / Processes | 4 (frontend, gateway, langgraph, nginx) | 3 (frontend, gateway, nginx) |
| Resource usage | Higher (two Python runtimes) | Lower (single Python runtime) |
| LangGraph Platform license | Required for production images | Not required |
| Cold start | Slower (two services to initialize) | Faster |

Both modes are functionally equivalent — the same agents, tools, and skills work in either mode.

`deploy.sh` supports building and starting separately. Images are mode-agnostic — the runtime mode is selected at start time:

```bash
# One-step (build + start)
deploy.sh            # standard mode (default)
deploy.sh --gateway  # gateway mode

# Two-step (build once, start with any mode)
deploy.sh build            # build all images
deploy.sh start            # start in standard mode
deploy.sh start --gateway  # start in gateway mode

# Stop
deploy.sh down
```

DeerFlow supports multiple sandbox execution modes:

- Local Execution (runs sandbox code directly on the host machine)
- Docker Execution (runs sandbox code in isolated Docker containers)
- Docker Execution with Kubernetes (runs sandbox code in Kubernetes pods via the provisioner service)

For Docker development, service startup follows the `config.yaml` sandbox mode. In Local/Docker modes, the provisioner is not started. See the Sandbox Configuration Guide to configure your preferred mode.

DeerFlow supports configurable MCP servers and skills to extend its capabilities. For HTTP/SSE MCP servers, OAuth token flows are supported (`client_credentials`, `refresh_token`). See the MCP Server Guide for detailed instructions.
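The MCP Server Guide holds the authoritative schema; purely as an illustration of the kind of entry it describes, a hypothetical HTTP MCP server with a `client_credentials` OAuth flow might be sketched like this (every key name and URL below is assumed, not taken from DeerFlow's docs):

```yaml
# Hypothetical sketch — consult the MCP Server Guide for the real key names.
mcp_servers:
  - name: my-http-tools
    transport: http
    url: https://tools.example.com/mcp
    oauth:
      flow: client_credentials
      client_id: $MCP_CLIENT_ID
      client_secret: $MCP_CLIENT_SECRET
      token_url: https://auth.example.com/oauth/token
```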
DeerFlow supports receiving tasks from messaging apps. Channels auto-start when configured — no public IP is required for any of them.

| Channel | Transport | Difficulty |
|---|---|---|
| Telegram | Bot API (long-polling) | Easy |
| Slack | Socket Mode | Moderate |
| Feishu / Lark | WebSocket | Moderate |
| WeChat | Tencent iLink (long-polling) | Moderate |
| WeCom | WebSocket | Moderate |

Configuration in `config.yaml`:

```yaml
channels:
  # LangGraph Server URL (default: http://localhost:2024)
  langgraph_url: http://localhost:2024
  # Gateway API URL (default: http://localhost:8001)
  gateway_url: http://localhost:8001

  # Optional: global session defaults for all mobile channels
  session:
    assistant_id: lead_agent  # or a custom agent name; custom agents are routed via lead_agent + agent_name
    config:
      recursion_limit: 100
    context:
      thinking_enabled: true
      is_plan_mode: false
      subagent_enabled: false

  feishu:
    enabled: true
    app_id: $FEISHU_APP_ID
    app_secret: $FEISHU_APP_SECRET
    # domain: https://open.feishu.cn      # China (default)
    # domain: https://open.larksuite.com  # International

  wecom:
    enabled: true
    bot_id: $WECOM_BOT_ID
    bot_secret: $WECOM_BOT_SECRET

  slack:
    enabled: true
    bot_token: $SLACK_BOT_TOKEN  # xoxb-...
    app_token: $SLACK_APP_TOKEN  # xapp-... (Socket Mode)
    allowed_users: []  # empty = allow all

  telegram:
    enabled: true
    bot_token: $TELEGRAM_BOT_TOKEN
    allowed_users: []  # empty = allow all

  wechat:
    enabled: false
    bot_token: $WECHAT_BOT_TOKEN
    ilink_bot_id: $WECHAT_ILINK_BOT_ID
    qrcode_login_enabled: true  # optional: allow first-time QR bootstrap when bot_token is absent
    allowed_users: []  # empty = allow all
    polling_timeout: 35
    state_dir: ./.deer-flow/wechat/state
    max_inbound_image_bytes: 20971520
    max_outbound_image_bytes: 20971520
    max_inbound_file_bytes: 52428800
    max_outbound_file_bytes: 52428800

    # Optional: per-channel / per-user session settings
    session:
      assistant_id: mobile-agent  # custom agent names are also supported here
      context:
        thinking_enabled: false
    users:
      "123456789":
        assistant_id: vip-agent
        config:
          recursion_limit: 150
        context:
          thinking_enabled: true
          subagent_enabled: true
```

Notes:

- `assistant_id: lead_agent` calls the default LangGraph assistant directly.
- If `assistant_id` is set to a custom agent name, DeerFlow still routes through `lead_agent` and injects that value as `agent_name`, so the custom agent's SOUL/config takes effect for IM channels.

Set the corresponding API keys in your `.env` file:

```bash
# Telegram
TELEGRAM_BOT_TOKEN=123456789:ABCdefGHIjklMNOpqrSTUvwxYZ

# Slack
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# Feishu / Lark
FEISHU_APP_ID=cli_xxxx
FEISHU_APP_SECRET=your_app_secret

# WeChat iLink
WECHAT_BOT_TOKEN=your_i
```
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.