Claude Code in Rust, Python, Go, Open source

Source: hackernews · Summarized and analyzed by Genesis Park

Summary

An open-source, multi-provider AI coding agent CLI has been released that lets you switch among 13 backends, including Anthropic and OpenAI, within a single environment. The tool supports the core agent features — streaming, tool calling, and a full agent loop — and delivers a 60–70% cost reduction through prompt caching. It also ships a security system built on a 2-stage LLM classifier and a marketplace where 45+ skills can be installed, significantly improving developer convenience.

Article

Open-source Claude Code alternative — a multi-provider AI coding agent CLI. Single file, 13 providers, zero native deps.

Installation:

```bash
# npm (recommended)
npm install -g cloclo

# Or run directly
npx cloclo

# Or clone and run
git clone https://github.com/SeifBenayed/claude-code-sdk.git
cd claude-code-sdk && node claude-native.mjs
```

Login (opens browser, saves token to macOS keychain):

```bash
cloclo --login         # Anthropic (Pro/Max subscription)
cloclo --openai-login  # OpenAI (ChatGPT Plus/Pro subscription)
```

Usage:

```bash
# Interactive REPL
cloclo                     # Default: Claude Sonnet
cloclo -m codex            # OpenAI Codex
cloclo -m ollama/llama3.2  # Local Ollama

# One-shot
cloclo -p "explain this code"
cloclo -m gpt-5.4 -p "fix the bug in main.js"
cat error.log | cloclo -p "explain this error"

# Programmatic (NDJSON bridge)
echo '{"type":"message","content":"hello"}' | cloclo --ndjson
```

Providers:

| Provider | Models | Auth | Status |
|---|---|---|---|
| Anthropic | claude-sonnet-4-6, claude-opus-4-6, claude-haiku-4-5 | ANTHROPIC_API_KEY or --login | Tested |
| OpenAI Chat | gpt-5.4, gpt-4.1, gpt-4o, o3, o4-mini | OPENAI_API_KEY or --openai-login | Tested |
| OpenAI Responses | gpt-5.3-codex | OPENAI_API_KEY or --openai-login | Tested |
| Google Gemini | gemini-2.5-flash, gemini-2.5-pro | GOOGLE_API_KEY | Supported |
| DeepSeek | deepseek-chat, deepseek-coder | DEEPSEEK_API_KEY | Supported |
| Mistral | mistral-small-latest, codestral-latest | MISTRAL_API_KEY | Supported |
| Groq | llama-3.3-70b-versatile, mixtral-8x7b | GROQ_API_KEY | Supported |
| Ollama | ollama/* (any pulled model) | None (local) | Supported |
| LM Studio | lmstudio/* | None (local) | Supported |
| vLLM | vllm/* | None (local) | Supported |
| Jan | jan/* | None (local) | Supported |
| llama.cpp | llamacpp/* | None (local) | Supported |

Switch providers live in the REPL: `/model codex`, `/model sonnet`, `/model ollama/llama3.2`

Model aliases:

| Alias | Resolves to | Backend |
|---|---|---|
| sonnet | claude-sonnet-4-6 | Anthropic |
| opus | claude-opus-4-6 | Anthropic |
| haiku | claude-haiku-4-5 | Anthropic |
| codex | gpt-5.3-codex | OpenAI Responses |
| gpt5 / 5.4 | gpt-5.4 | OpenAI Chat |
| 4o | gpt-4o | OpenAI Chat |
| o3 | o3 | OpenAI Chat |
| o4-mini | o4-mini | OpenAI Chat |

Features:

- Multi-provider: 13 backends, one CLI. Switch mid-conversation.
- Full agent loop: streaming, tool calling, multi-turn, sub-agents with fork mode
- Built-in tools: Bash, Read, Write, Edit, Glob, Grep, WebFetch, WebSearch, Agent, MemoryShare
- Extended thinking: `--thinking 8192` for Claude Sonnet/Opus
- MCP integration: `--mcp-config servers.json` for external tool servers
- Permissions: 6 modes from `auto` to `bypassPermissions`, with LLM security classifier
- Session management: resume, checkpoints, rewind
- Memory system: persistent cross-session memory with auto-suggest
- Shareable moments: capture and share interesting exchanges (`/share`) as markdown, HTML, SVG
- Skills marketplace: browse and install 45+ skills (`/marketplace`)
- Skills & hooks: extensible slash commands, 13 lifecycle hooks (command, webhook, prompt, agent)
- NDJSON bridge: programmatic use from any language
- Ink UI: rich terminal interface with streaming output, markdown rendering, permission dialogs, tool spinners, syntax highlighting, history navigation, and context tracking
- Prompt caching: automatic cache_control on the system prompt and last user message for 60–70% cost reduction
- Security: 2-stage LLM classifier (28 BLOCK rules, 7 ALLOW exceptions, user intent analysis), sandbox support

Slash commands:

| Command | Description |
|---|---|
| /model [name] | Switch model (live backend switching) |
| /thinking [budget] | Toggle extended thinking |
| /compact | Compress conversation to save context |
| /permissions | Change permission mode |
| /memory | Show saved memories |
| /share [N] | Capture last exchange as shareable moment (md/html/svg) |
| /shares | Browse past shared moments |
| /marketplace | Browse and install skills from the registry |
| /catalog | Browse and install tools from the catalog |
| /checkpoints | List file checkpoints |
| /rewind | Restore files to a checkpoint |
| /sessions | List recent sessions |
| /cost | Show session cost |
| /webhook | Manage webhook hooks |
| /clear | New session |
| /login | Login to Anthropic |
| /openai-login | Login to OpenAI |
| /exit | Quit |

CLI flags:

| Flag | Description |
|---|---|
| -p, --print | One-shot mode |
| -m, --model | Model name or alias |
| --provider | Force a specific provider |
| --ndjson | NDJSON bridge mode (stdin/stdout) |
| --output | Output format: text (default) or json |
| --json | Shorthand for --output json |
| --output-version | Lock JSON output schema version (default: 1) |
| -y, --yes | Skip all permission prompts |
| --timeout | Global timeout (exit code 5 if exceeded) |
| --login | Anthropic OAuth login |
| --openai-login | OpenAI OAuth login |
| --api-key | Anthropic API key |
| --openai-api-key | OpenAI API key |
| --permission-mode | auto\|default\|plan\|acceptEdits\|bypassPermissions\|dontAsk |
| --max-turns | Max agent turns (default: 25) |
| --max-tokens | Max output tokens (default: 16384) |
| --thinking | Extended thinking token budget |
| --mcp-config | MCP servers config file |
| --resume | Resume most recent session |
| --session-id | Resume specific session |
| --verbose | Debug logging |

Exit codes:

| Code | Meaning |
|---|---|
| 0 | Success |
| 5 | Timeout exceeded (`--timeout`) |
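The NDJSON bridge frames each request and response as one JSON object per line, which is what makes it easy to drive from any language. A minimal Python sketch of that framing, assuming only the request shape shown above (`{"type":"message","content":"..."}`); the helper names are illustrative and any response fields cloclo emits are not specified here:

```python
import io
import json

def ndjson_dump(messages):
    """Serialize dicts to NDJSON: one JSON object per line."""
    return "".join(json.dumps(m) + "\n" for m in messages)

def ndjson_load(stream):
    """Parse an NDJSON stream back into dicts, skipping blank lines."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

# The request shape shown above; a real client would pipe this into
# `cloclo --ndjson` and read NDJSON events back from its stdout.
request = ndjson_dump([{"type": "message", "content": "hello"}])
events = list(ndjson_load(io.StringIO(request)))
```

Because each line is an independent JSON object, a caller can stream events incrementally instead of waiting for one large response document.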
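The prompt-caching feature works by attaching Anthropic's `cache_control` markers to the stable parts of the request so the provider can reuse already-processed prefix tokens. A sketch of the placement described above (system prompt plus last user message), using Anthropic's ephemeral `cache_control` block format; the function name and normalization logic are illustrative, not cloclo's actual code:

```python
def with_prompt_caching(system_prompt, messages,
                        model="claude-sonnet-4-6", max_tokens=16384):
    """Mark the system prompt and the last user message as cacheable,
    mirroring the automatic cache_control placement described above.
    Illustrative sketch only."""
    body = {
        "model": model,
        "max_tokens": max_tokens,
        "system": [{
            "type": "text",
            "text": system_prompt,
            "cache_control": {"type": "ephemeral"},
        }],
        "messages": [dict(m) for m in messages],
    }
    # Attach cache_control to the final content block of the last user turn.
    for m in reversed(body["messages"]):
        if m["role"] == "user":
            blocks = m["content"]
            if isinstance(blocks, str):  # normalize string content to blocks
                blocks = [{"type": "text", "text": blocks}]
                m["content"] = blocks
            blocks[-1]["cache_control"] = {"type": "ephemeral"}
            break
    return body
```

Since the system prompt and conversation prefix rarely change between agent turns, caching them is what yields the 60–70% cost reduction claimed above.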
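For MCP integration, `--mcp-config servers.json` points the agent at external tool servers. The fragment below follows the common `mcpServers` convention (a map of server name to launch command) used by Claude Code and other MCP clients; whether cloclo expects exactly this schema is an assumption:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```

With a file like this saved as `servers.json`, the server's tools would be launched and exposed to the agent via `cloclo --mcp-config servers.json`.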

This analysis was written by the Genesis Park editorial team with AI assistance. The original article is available via the source link.
