OpenClaude – A fork of the leaked Claude Code source, made to work with any LLM.
hackernews
📦 Open Source
#ai deal
#anthropic
#claude
#claude code
#gemini
#gpt-4
#llama
#mistral
#openai
#openclaude
#multi llm
#open source
#coding agent
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
'OpenClaude', an open-source coding-agent CLI developed as a fork of the leaked Claude Code source, has been released. The tool supports a variety of LLM backends, both cloud and local, including OpenAI, Gemini, GitHub Models, and Ollama, all from a single terminal environment. Users can route models per agent role through a JSON settings file to optimize costs, and can enable web search by integrating DuckDuckGo or the Firecrawl API.
Full Text
OpenClaude is an open-source coding-agent CLI for cloud and local model providers. Use OpenAI-compatible APIs, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported backends while keeping one terminal-first workflow: prompts, tools, agents, MCP, slash commands, and streaming output.

Quick Start | Setup Guides | Providers | Source Build | VS Code Extension | Community

- Use one CLI across cloud APIs and local model backends
- Save provider profiles inside the app with `/provider`
- Run with OpenAI-compatible services, Gemini, GitHub Models, Codex, Ollama, Atomic Chat, and other supported providers
- Keep coding-agent workflows in one place: bash, file tools, grep, glob, agents, tasks, MCP, and web tools
- Use the bundled VS Code extension for launch integration and theme support

Install:

```shell
npm install -g @gitlawb/openclaude
```

If the install later reports `ripgrep not found`, install ripgrep system-wide and confirm `rg --version` works in the same terminal before starting OpenClaude. Then launch:

```shell
openclaude
```

Inside OpenClaude:

- run `/provider` for guided provider setup and saved profiles
- run `/onboard-github` for GitHub Models onboarding

OpenAI setup, macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
openclaude
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_API_KEY="sk-your-key-here"
$env:OPENAI_MODEL="gpt-4o"
openclaude
```

Ollama (local) setup, macOS / Linux:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=qwen2.5-coder:7b
openclaude
```

Windows PowerShell:

```powershell
$env:CLAUDE_CODE_USE_OPENAI="1"
$env:OPENAI_BASE_URL="http://localhost:11434/v1"
$env:OPENAI_MODEL="qwen2.5-coder:7b"
openclaude
```

Beginner-friendly guides:

Advanced and source-build guides:

| Provider | Setup Path | Notes |
|---|---|---|
| OpenAI-compatible | `/provider` or env vars | Works with OpenAI, OpenRouter, DeepSeek, Groq, Mistral, LM Studio, and other compatible /v1 servers |
| Gemini | `/provider` or env vars | Supports API key, access token, or local ADC workflow on current main |
| GitHub Models | `/onboard-github` | Interactive onboarding with saved credentials |
| Codex | `/provider` | Uses existing Codex credentials when available |
| Ollama | `/provider` or env vars | Local inference with no API key |
| Atomic Chat | advanced setup | Local Apple Silicon backend |
| Bedrock / Vertex / Foundry | env vars | Additional provider integrations for supported environments |

Features:

- Tool-driven coding workflows: Bash, file read/write/edit, grep, glob, agents, tasks, MCP, and slash commands
- Streaming responses: real-time token output and tool progress
- Tool calling: multi-step tool loops with model calls, tool execution, and follow-up responses
- Images: URL and base64 image inputs for providers that support vision
- Provider profiles: guided setup plus saved `.openclaude-profile.json` support
- Local and remote model backends: cloud APIs, local servers, and Apple Silicon local inference

OpenClaude supports multiple providers, but behavior is not identical across all of them:

- Anthropic-specific features may not exist on other providers
- Tool quality depends heavily on the selected model
- Smaller local models can struggle with long multi-step tool flows
- Some providers impose lower output caps than the CLI defaults, and OpenClaude adapts where possible

For best results, use models with strong tool/function-calling support.

OpenClaude can route different agents to different models through settings-based routing. This is useful for cost optimization or for splitting work by model strength.
Add to `~/.claude/settings.json`:

```json
{
  "agentModels": {
    "deepseek-chat": {
      "base_url": "https://api.deepseek.com/v1",
      "api_key": "sk-your-key"
    },
    "gpt-4o": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "agentRouting": {
    "Explore": "deepseek-chat",
    "Plan": "gpt-4o",
    "general-purpose": "gpt-4o",
    "frontend-dev": "deepseek-chat",
    "default": "gpt-4o"
  }
}
```

When no routing match is found, the global provider remains the fallback.

Note: `api_key` values in `settings.json` are stored in plaintext. Keep this file private and do not commit it to version control.

By default, WebSearch works on non-Anthropic models using DuckDuckGo. This gives GPT-4o, DeepSeek, Gemini, Ollama, and other OpenAI-compatible providers a free web-search path out of the box.

Note: the DuckDuckGo fallback works by scraping search results and may be rate-limited, blocked, or subject to DuckDuckGo's Terms of Service. If you want a more reliable, supported option, configure Firecrawl.

For Anthropic-native backends and Codex responses, OpenClaude keeps the native provider web-search behavior. WebFetch works, but its basic HTTP plus HTML-to-markdown path can still fail on JavaScript-rendered sites or sites that block plain HTTP requests.

Set a Firecrawl API key if you want Firecrawl-powered search/fetch behavior:

```shell
export FIRECRAWL_API_KEY=your-key-here
```

With Firecrawl enabled, WebSearch can use Firecrawl's search API while DuckDuckGo remains the fallback.
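The routing rules in the settings example amount to a lookup with a default: a named agent maps to its configured model, anything unmatched falls through to the `default` entry, and past that the global provider applies. A minimal shell sketch of that lookup, reusing the same mapping as the example configuration (this is an illustration, not OpenClaude's actual code):

```shell
# Illustration of the agentRouting lookup from the settings example above;
# not OpenClaude's actual implementation.
route_agent() {
  case "$1" in
    Explore|frontend-dev) echo "deepseek-chat" ;;
    Plan|general-purpose) echo "gpt-4o" ;;
    *)                    echo "gpt-4o" ;;  # the "default" entry
  esac
}

route_agent Explore          # deepseek-chat
route_agent some-new-agent   # gpt-4o (no match, "default" applies)
```

The practical upshot is that cheap, fast models can handle exploratory agents while stronger models handle planning, without changing the global provider.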
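The provider table's note about "other compatible /v1 servers" means the same environment variables shown in Quick Start can target any OpenAI-compatible endpoint, not just OpenAI itself. A hypothetical sketch pointing the CLI at DeepSeek (the base URL matches the routing example above; the API key and model name are placeholders):

```shell
# Hypothetical: reuse the OpenAI-compatible env vars for another /v1 server.
# Key and model are placeholders; only the variable names come from the docs.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_API_KEY=sk-your-key
export OPENAI_MODEL=deepseek-chat
# then launch: openclaude
```

The same pattern should apply to OpenRouter, Groq, LM Studio, and the other /v1-compatible services listed in the table, with only the base URL and model name changing.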
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.