A Claude Code fork that works with any OpenAI-compatible LLM
hackernews
📦 Open source
#ai tools
#ai deals
#anthropic
#claude
#claude code
#gpt-5
#llama
#mistral
#openai
#openai compatible
#multi llm
#open source
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
'claude-code-any' is an open-source agent tool forked from Claude Code v2.1.88 that supports a range of LLM backends, including OpenAI, Ollama, and DeepSeek. By setting a single environment variable, users can connect the AI model of their choice while keeping Claude Code's full agent toolchain, including file editing and search. It also offers smart routing, which analyzes keywords in the prompt to send each task type (writing code, fixing bugs, summarizing, and so on) to the best-suited model, and an ACP protocol for OpenClaw integration.
Body
A fork of Claude Code v2.1.88 with multi-LLM support, provider profiles, smart task routing, and ACP protocol for OpenClaw integration. Use any LLM backend — OpenAI, Ollama, LM Studio, vLLM, Together AI, Groq, or any OpenAI-compatible server — with the full Claude Code agent toolchain (file editing, bash, grep, glob, multi-file planning).

Install with the one-liner:

```shell
curl -fsSL https://raw.githubusercontent.com/jiangyurong609/claude-code-any/main/install.sh | bash
```

Or build from source:

```shell
git clone https://github.com/jiangyurong609/claude-code-any.git
cd claude-code-any
pnpm install --registry https://registry.npmjs.org
bun run build.ts
npm link              # Install globally as `claude-any`
claude-any --version  # 2.1.88 (Claude Code)
claude-any doctor     # show diagnostics
```

The fastest way to configure a backend is a provider profile. Set one env var and go:

```shell
# OpenAI
CLAUDE_ANY_PROFILE=openai OPENAI_API_KEY=sk-... claude-any

# DeepSeek
CLAUDE_ANY_PROFILE=deepseek OPENAI_API_KEY=sk-... claude-any

# Kimi K2.5
CLAUDE_ANY_PROFILE=kimi OPENAI_API_KEY=sk-... claude-any

# Ollama (local, free)
CLAUDE_ANY_PROFILE=ollama claude-any

# Groq (fast)
CLAUDE_ANY_PROFILE=groq OPENAI_API_KEY=gsk_... claude-any

# xAI Grok
CLAUDE_ANY_PROFILE=xai OPENAI_API_KEY=xai-... claude-any

# OpenRouter (access any model)
CLAUDE_ANY_PROFILE=openrouter OPENAI_API_KEY=sk-or-... claude-any

# Together AI / Mistral / LM Studio / vLLM
CLAUDE_ANY_PROFILE=together OPENAI_API_KEY=... claude-any
CLAUDE_ANY_PROFILE=mistral OPENAI_API_KEY=... claude-any
CLAUDE_ANY_PROFILE=lmstudio claude-any
CLAUDE_ANY_PROFILE=vllm claude-any

# Custom endpoint
CLAUDE_ANY_PROFILE=custom OPENAI_BASE_URL=https://your-server/v1 OPENAI_MODEL=your-model claude-any
```

Each profile sets sensible defaults. Explicit env vars always override profile defaults.
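As a minimal sketch of that precedence rule — the `resolve_model` helper below is hypothetical, not part of the CLI — an explicitly set `OPENAI_MODEL` wins over whatever default the profile would supply:

```shell
# Hypothetical illustration of profile-vs-env precedence; resolve_model is
# not a real claude-any command, just a sketch of the rule described above.
resolve_model() {
  profile_default=$1
  # ${OPENAI_MODEL:-default} keeps an explicit env var if set, else falls back
  echo "${OPENAI_MODEL:-$profile_default}"
}

unset OPENAI_MODEL
resolve_model gpt-5.4             # no env var set: the profile default is used

export OPENAI_MODEL=deepseek-chat
resolve_model gpt-5.4             # explicit env var overrides the profile
```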
| Profile | Base URL | Default Model | API Key |
|---|---|---|---|
| openai | api.openai.com | gpt-5.4 | Required |
| deepseek | api.deepseek.com | deepseek-chat | Required |
| kimi | api.moonshot.cn | kimi-k2.5 | Required |
| xai | api.x.ai | grok-4.20-beta | Required |
| openrouter | openrouter.ai | openai/gpt-5.4 | Required |
| ollama | localhost:11434 | qwen3.5 | Not needed |
| lmstudio | localhost:1234 | local-model | Not needed |
| vllm | localhost:8000 | default | Not needed |
| together | api.together.xyz | Qwen3.5-72B | Required |
| groq | api.groq.com | llama-3.3-70b | Required |
| mistral | api.mistral.ai | mistral-small-latest | Required |
| anthropic | api.anthropic.com | Claude default | Required |

If you prefer explicit env vars over profiles:

```shell
# OpenAI-compatible endpoint
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-5.4"
claude-any

# Local Ollama server
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_MODEL="qwen3.5"
claude-any

# Anthropic directly
export ANTHROPIC_API_KEY="sk-ant-api03-..."
claude-any

# Bedrock
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION="us-east-1"

# Vertex
export CLAUDE_CODE_USE_VERTEX=1
export CLOUD_ML_REGION="us-east5"
export ANTHROPIC_VERTEX_PROJECT_ID="your-project"

# Foundry
export CLAUDE_CODE_USE_FOUNDRY=1
export ANTHROPIC_FOUNDRY_BASE_URL="https://your-resource.services.ai.azure.com"
```

Route different task types to different models/providers automatically.
```shell
claude-any --profile balanced --print "fix the failing tests"
claude-any --profile cheap --print "summarize this file"
claude-any --profile private --print "review sensitive code"
claude-any --profile best --print "design the new architecture"
```

| Profile | Plan/Code | Fix/Review | Search/Summarize |
|---|---|---|---|
| best | gpt-5.4-pro | gpt-5.4 | gpt-5.4 |
| balanced | gpt-5.4 | gpt-5.4-mini | gpt-5.4-nano |
| cheap | gpt-5.4-mini | gpt-5.4-nano | gpt-5.4-nano |
| private | qwen3.5 (local) | qwen3.5 (local) | qwen3.5 (local) |

The router automatically detects task type from your prompt:

| Keywords | Route |
|---|---|
| plan, design, approach | plan |
| fix, bug, error, failing | fix |
| review, PR, audit | review |
| find, search, grep, where | search |
| summarize, explain | summarize |
| (default) | code |

Force a route or debug routing decisions:

```shell
claude-any --route review --print "check this code"

CLAUDE_ANY_DEBUG_ROUTING=1 claude-any --profile balanced --print "fix auth bug"
# stderr: [routing] profile=balanced route=fix provider=openai-compatible model=gpt-5.4-mini
```

Create `~/.claude-any/config.json` or `.claude-any.json` in your project:

```json
{
  "defaultProfile": "balanced",
  "profiles": {
    "my-team": {
      "routes": {
        "plan": { "provider": "openai", "model": "gpt-5.4" },
        "code": { "provider": "openai", "model": "gpt-5.4" },
        "fix": { "provider": "ollama", "model": "qwen3.5", "baseURL": "http://localhost:11434/v1" }
      }
    }
  }
}
```

claude-code-any speaks ACP (Agent Communication Protocol) for structured integration with acpx and OpenClaw.

```shell
# Direct ACP server
claude-any acp --stdio

# With acpx
acpx --agent "claude-any acp --stdio" "Fix the failing tests"
```

JSON-RPC 2.0 over stdin/stdout:

```shell
# Initialize
echo '{"jsonrpc":"2.0","id":1,"method":"initialize"}' | claude-any acp --stdio

# Send prompt
echo '{"jsonrpc":"2.0","id":2,"method":"prompt","params":{"prompt":"Fix auth","profile":"balanced"}}'
```
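The keyword table above can be mirrored as a small shell sketch. This is only an illustration of the first-match routing idea, assuming simple substring matching; it is not the fork's actual implementation, and `route_for` is a hypothetical helper:

```shell
# Hypothetical sketch of the keyword router: classify a prompt into a route
# by checking keyword groups in order, falling back to "code".
route_for() {
  prompt=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$prompt" in
    *plan*|*design*|*approach*)      echo plan ;;
    *fix*|*bug*|*error*|*failing*)   echo fix ;;
    *review*|*audit*)                echo review ;;
    *find*|*search*|*grep*|*where*)  echo search ;;
    *summarize*|*explain*)           echo summarize ;;
    *)                               echo code ;;
  esac
}

route_for "fix the failing tests"        # fix
route_for "summarize this file"          # summarize
route_for "design the new architecture"  # plan
route_for "add a login form"             # code (default)
```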
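The JSON-RPC messages above can also be generated with a tiny helper instead of hand-written `echo` strings. `acp_msg` is just a local convenience function for this sketch (not a claude-any subcommand); in practice its output would be piped into `claude-any acp --stdio`:

```shell
# acp_msg is a hypothetical local helper, not part of the CLI: it builds one
# JSON-RPC 2.0 message per line, with optional params as the third argument.
acp_msg() {
  printf '{"jsonrpc":"2.0","id":%d,"method":"%s"%s}\n' \
    "$1" "$2" "${3:+,\"params\":$3}"
}

acp_msg 1 initialize
acp_msg 2 prompt '{"prompt":"Fix auth","profile":"balanced"}'
# Pipe into the agent, e.g.:
#   { acp_msg 1 initialize; acp_msg 2 prompt '{"prompt":"Fix auth"}'; } | claude-any acp --stdio
```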
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.