Show HN: Claude-codex-proxy – Use Claude Code with your ChatGPT subscription

hackernews | 📦 Open source
#claude code #llama #mistral #multi-llm #open source
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

claude-codex-proxy is a tool that lets Claude Code run on an OpenAI ChatGPT Plus or Pro subscription, as a workaround for Anthropic's tight usage limits. The proxy translates Anthropic API requests and forwards them to OpenAI's Codex backend; users authenticate their account through a login flow. However, because it drives the backend through an unofficial client, it sits in a terms-of-service gray area, and it has some functional limitations, such as in handling image inputs and forwarding reasoning blocks.

Full text

claude-codex-proxy lets you use Claude Code with your ChatGPT Plus or Pro subscription.

Quick start · How it works · Configuration · Limitations

I feel Claude Code is still the best harness around, despite occasional frustrations caused by updates. However, Anthropic keeps tightening the usage limits, while OpenAI is still much more generous. If you want to use OpenAI plans, your best options seem to be OpenCode and Codex. I tried OpenCode, but the UX has many rough edges, especially around skills feeling like a second-class feature. Fortunately it's open source, and I ended up forking it and applying some patches, but I would much rather not have to.

## Quick start

Homebrew (macOS and Linux):

```sh
brew install raine/claude-codex-proxy/claude-codex-proxy
```

Install script (macOS and Linux):

```sh
curl -fsSL https://raw.githubusercontent.com/raine/claude-codex-proxy/main/scripts/install.sh | bash
```

Manual: download a prebuilt binary for your platform from the releases page.

Open a browser (PKCE flow):

```sh
claude-codex-proxy auth login
```

Or, on a headless machine (device code flow):

```sh
claude-codex-proxy auth device
```

Either command prints a URL. Sign in with your ChatGPT Plus/Pro account. On macOS, credentials are stored in the Keychain. On other platforms, they are stored locally for reuse by the proxy.

Verify it stuck:

```sh
claude-codex-proxy auth status
```

Start the proxy:

```sh
claude-codex-proxy serve
```

Defaults to http://127.0.0.1:18765 (loopback only). Override with `PORT=11435 claude-codex-proxy serve`.

One-shot:

```sh
ANTHROPIC_BASE_URL=http://localhost:18765 \
ANTHROPIC_AUTH_TOKEN=unused \
ANTHROPIC_MODEL=gpt-5.4 \
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
claude
```

Or set it persistently in ~/.claude/settings.json:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:18765",
    "ANTHROPIC_AUTH_TOKEN": "unused",
    "ANTHROPIC_MODEL": "gpt-5.4",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": 1
  }
}
```

Set ANTHROPIC_MODEL to a model your ChatGPT subscription is allowed to use.
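The same environment can also be assembled programmatically, e.g. when launching Claude Code from a wrapper script. A minimal sketch, using the values from the quick start above; the `buildClaudeEnv` helper is hypothetical and not part of the project:

```typescript
// Hypothetical helper (not part of claude-codex-proxy): assemble the
// environment variables Claude Code needs to talk to a local proxy
// instance, mirroring the one-shot invocation shown above.
function buildClaudeEnv(
  model: string,
  baseUrl = "http://127.0.0.1:18765",
): Record<string, string> {
  return {
    ANTHROPIC_BASE_URL: baseUrl,    // point Claude Code at the proxy
    ANTHROPIC_AUTH_TOKEN: "unused", // proxy handles auth; value is ignored
    ANTHROPIC_MODEL: model,         // must be allowed on your subscription
    CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC: "1",
  };
}

const env = buildClaudeEnv("gpt-5.4");
console.log(env.ANTHROPIC_BASE_URL); // the proxy's loopback address
```

You would then pass this record as the `env` of a spawned `claude` process.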
Confirmed working on Plus: gpt-5.4, gpt-5.3-codex. Also verified working on this project: gpt-5.2, gpt-5.4-mini.

The proxy also accepts a small set of convenience aliases and resolves them before calling the upstream Codex backend:

| Alias | Resolves to |
|---|---|
| haiku, claude-haiku-4-5, claude-haiku-4-5-20251001 | gpt-5.4-mini |
| sonnet, claude-sonnet-4-6 | gpt-5.4 |
| opus, claude-opus-4-7 | gpt-5.4 |

These aliases are only shorthand for portability and cheaper subagent configs; they are not semantic equivalents of the Claude models they resemble. If the resolved model isn't supported by your account, upstream returns a 400 like "The 'gpt-4.1' model is not supported when using Codex with a ChatGPT account." The proxy surfaces that verbatim.

For example, you can now point Claude Code or a subagent at the proxy with:

```sh
ANTHROPIC_MODEL=haiku claude
```

and the proxy will send gpt-5.4-mini upstream.

## How it works

```mermaid
sequenceDiagram
    autonumber
    participant CC as Claude Code
    participant P as claude-codex-proxy
    participant AUTH as auth.openai.com
    participant CGX as chatgpt.com/backend-api/codex
    Note over P,AUTH: One-time: PKCE or device OAuth,<br/>tokens cached locally for reuse
    CC->>P: POST /v1/messages (Anthropic shape, stream: true)
    alt access token expiring
        P->>AUTH: POST /oauth/token (refresh_token)
        AUTH-->>P: new access + refresh
    end
    P->>P: translate request:<br/>strip max_tokens / temperature / cache_control<br/>system blocks → top-level "instructions"<br/>tool_use.id = call_id (identity)<br/>prompt_cache_key = session id
    P->>CGX: POST /responses<br/>Bearer + ChatGPT-Account-Id + originator=claude-codex-proxy
    CGX-->>P: Responses API SSE<br/>(output_item.*, output_text.delta,<br/>function_call_arguments.*, codex.rate_limits, response.completed)
    P->>P: reducer: typed events<br/>(text/tool start/delta/stop, finish)
    P-->>CC: Anthropic SSE<br/>(message_start, content_block_*, message_delta, message_stop)
```

| Command | Description |
|---|---|
| serve | Start the proxy on PORT (default 18765) |
| auth login | Browser OAuth (PKCE) |
| auth device | Device-code OAuth (headless) |
| auth status | Show account ID and token expiry |
| auth logout | Delete stored auth credentials |

### serve

Starts the HTTP proxy and blocks. Binds to 127.0.0.1 only. Logs to $XDG_STATE_HOME/claude-codex-proxy/proxy.log (rotated at 20 MiB). Set CCP_LOG_STDERR=1 to mirror log lines to stderr while running.

```sh
claude-codex-proxy serve
PORT=11435 claude-codex-proxy serve
CCP_LOG_STDERR=1 claude-codex-proxy serve
```

Prints the exact ANTHROPIC_BASE_URL / ANTHROPIC_MODEL env vars to export on startup. Refuses to serve traffic until auth login (or auth device) has stored a token.

### auth login

Runs the PKCE browser flow against auth.openai.com using the Codex CLI's client ID. Prints a URL, opens a local callback listener on port 1455, waits for the browser to redirect back, and stores the resulting access/refresh tokens in the Keychain on macOS, or locally on other platforms. The process exits automatically once the tokens are saved.

```sh
claude-codex-proxy auth login
```

Sign in with your ChatGPT Plus/Pro account, not an OpenAI API account. The token file includes the extracted chatgpt_account_id so the proxy can set the ChatGPT-Account-Id header on every upstream call.

### auth device

Same OAuth flow, but for headless machines. Prints a short user code and a URL; you enter the code from any browser on any other device, and the CLI polls auth.openai.com until you authorize, then stores the token.

```sh
claude-codex-proxy auth device
```

Useful over SSH, inside a container, or on any host that can't open a browser.

### auth status

Shows whether credentials are stored, the account ID, and how long until the access token expires. Exits non-zero if no auth is present.

```sh
claude-codex-proxy auth status
```

Example output:

```
Account: 79342a5e-57b7-44ea-bfdc-a83ba070dad6
Expires: 2026-04-28T16:46:04.827Z (in 863946s)
Storage: macOS Keychain
```

The proxy refreshes the access token 5 minutes before expiry with a single-flight guard, so concurrent requests never trigger stampedes of refresh calls.

### auth logout

Removes stored auth credentials. On macOS this deletes the Keychain entry.
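The translate-request step from the flow above can be sketched in TypeScript. This is a simplified illustration under assumed request shapes, not the proxy's actual code:

```typescript
// Sketch of the request translation described above: strip fields the
// Codex backend rejects, hoist system blocks into "instructions", and
// reuse the session id as prompt_cache_key. Shapes are simplified.
type AnthropicMsg = {
  model: string;
  max_tokens?: number;
  temperature?: number;
  system?: { type: "text"; text: string }[];
  messages: { role: "user" | "assistant"; content: string }[];
};

function anthropicToResponses(req: AnthropicMsg, sessionId: string) {
  // max_tokens / temperature / cache_control are intentionally not copied
  return {
    model: req.model,
    // system blocks become the top-level "instructions" string
    instructions: (req.system ?? []).map((b) => b.text).join("\n"),
    input: req.messages.map((m) => ({ role: m.role, content: m.content })),
    prompt_cache_key: sessionId, // session id doubles as the cache key
    stream: true,
  };
}

const out = anthropicToResponses(
  {
    model: "gpt-5.4",
    max_tokens: 4096,
    temperature: 0.2,
    system: [{ type: "text", text: "You are Claude Code." }],
    messages: [{ role: "user", content: "hello" }],
  },
  "session-123",
);
console.log("max_tokens" in out); // false: stripped before the upstream call
```

Tool-call id mapping (tool_use.id = call_id) is an identity pass-through and is omitted here for brevity.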
No server call is needed; the refresh token just becomes dead.

```sh
claude-codex-proxy auth logout
```

Run auth login again to re-authenticate.

## Configuration

The proxy speaks enough of the Anthropic API for Claude Code:

- POST /v1/messages: the main turn endpoint (streaming and non-streaming)
- POST /v1/messages?beta=true: same (Claude Code always sends ?beta=true)
- POST /v1/messages/count_tokens: local token count via gpt-tokenizer (o200k_base); used by Claude Code's compaction logic
- GET /healthz: liveness check

Settings are environment variables on the proxy process, not a config file.

| Variable | Default | Purpose |
|---|---|---|
| PORT | 18765 | Proxy listen port |
| XDG_STATE_HOME | ~/.local/state | Base dir for proxy.log |
| CCP_LOG_STDERR | unset | Also mirror log lines to stderr |
| CCP_LOG_VERBOSE | unset | Log full request/response bodies + every SSE event |

$XDG_STATE_HOME/claude-codex-proxy/proxy.log is a JSON-lines log, rotated at 20 MiB. Secrets (authorization, access, refresh, id_token, ChatGPT-Account-Id, …) are redacted before write.

## Limitations

- Terms of service: using the Codex backend from a non-official client is the same gray area OpenCode occupies, although OpenAI seems to be cool with OpenCode. Use at your own risk.
- Rate limits: shared across all clients of your ChatGPT account. codex.rate_limits.limit_reached is surfaced as HTTP 429 with retry-after.
- Image inputs in tool results: the Responses API function_call_output only takes a string, so image blocks nested inside tool_result are replaced with a [image omitted: ] placeholder. Top-level user-message images do pass through.
- Reasoning blocks: not forwarded to Claude Code (dropped), even if the upstream model produced them.
- Session title generation: Claude Code's parallel title-gen request is forwarded to Codex like any other structured-output request. This costs a handful of tokens per session rather than being stubbed.
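The tool-result limitation above can be illustrated with a small sketch. The block shapes are assumptions, and so is the text inside the placeholder brackets (the README elides what follows "image omitted:"):

```typescript
// Illustrative sketch (assumed shapes): because function_call_output
// accepts only a string, image blocks nested in a tool_result must be
// flattened away. Using media_type inside the placeholder is a guess,
// not necessarily what the proxy emits.
type ToolResultBlock =
  | { type: "text"; text: string }
  | { type: "image"; source: { media_type: string; data: string } };

function flattenToolResult(blocks: ToolResultBlock[]): string {
  return blocks
    .map((b) =>
      b.type === "text"
        ? b.text
        : `[image omitted: ${b.source.media_type}]`, // placeholder instead of pixels
    )
    .join("\n");
}
```

Top-level user-message images avoid this path entirely, which is why they pass through.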
- OpenAI-flavored output_config.format: translated to the Responses API text.format (json_schema with strict: true); other Anthropic-specific output_config fields are dropped.

## Development

```sh
bunx tsc --noEmit    # typecheck
bun src/cli.ts serve # run locally
tail -f ~/.local/state/claude-codex-proxy/proxy.log | jq .
```

Install a compiled dev build globally, i.e. compile the current working tree to a binary and place it on your PATH without linking:

```sh
mkdir -p ~/.local/bin
bun build ./src/cli.ts --compile --outfile ~/.local/bin/claude-codex-proxy
```

## Related projects

- claude-history: search Claude Code conversation history from the terminal
- git-surgeon: non-interactive hunk-level git staging for AI agents
- workmux: manage parallel AI coding tasks in separate git worktrees with tmux
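Returning to the output_config note above, the translation can be sketched as follows. Field names on the Anthropic side are assumptions for illustration:

```typescript
// Sketch of the output_config translation described above: a JSON-schema
// output request becomes the Responses API's text.format with strict: true,
// and any other output_config fields are simply dropped. Shapes are assumed.
type OutputConfig = {
  format?: { type: "json_schema"; name: string; schema: unknown };
  [other: string]: unknown; // Anthropic-specific extras, to be dropped
};

function translateOutputConfig(cfg: OutputConfig) {
  if (!cfg.format) return undefined;
  return {
    format: {
      type: "json_schema" as const,
      name: cfg.format.name,
      schema: cfg.format.schema,
      strict: true, // always strict, per the note above
    },
  };
}
```

Only the format field survives the translation; everything else in output_config has no Responses API counterpart.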

This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
