Show HN: Lightport – An AI gateway that makes LLM providers OpenAI-compatible

hackernews | 📦 Open source
#ai models #anthropic #openai #smart routing #automatic failover
Source: hackernews · Summarized and analyzed by Genesis Park

Summary

Lightport is a lightweight, focused AI gateway that translates OpenAI-compatible requests into the formats of other LLM providers. It supports more than 77 providers in total, including OpenAI, Anthropic, and Google, and was split off from the Portkey AI Gateway to concentrate solely on request and response transformation. The project deliberately excludes operational features such as retries and caching, pursuing a single goal: API compatibility.

Full text

A lightweight AI gateway that makes LLM providers OpenAI-compatible.

Lightport does one thing: it accepts OpenAI-compatible requests, transforms them for the target provider, and returns the response. That's it. Retries, secret management, caching, rate limiting, and other operational concerns are explicitly non-goals. Those are better handled either at a service layer above Lightport or as custom middleware.

Supported endpoints:

- POST /v1/chat/completions
- POST /v1/completions
- POST /v1/responses (+ GET, DELETE, input_items)

Supported providers: OpenAI, Anthropic, Azure OpenAI, Google Gemini, Vertex AI, Bedrock, Cohere, Mistral, Groq, Deepseek, Together AI, Fireworks, Ollama, and more (77 total).

Lightport started as a fork of Portkey AI Gateway. Our sole use case for the gateway has always been making AI providers OpenAI-compatible – we only needed the request/response transformation layer. Since then, Portkey has evolved into a full-featured AI gateway with guardrails, fallbacks, automatic retries, load balancing, request timeouts, smart caching, usage analytics, cost management, and more. We believe those capabilities belong at a higher abstraction level – which is what Glama provides – rather than in the gateway itself.

Since forking, we have fixed numerous bugs, added integration tests for every provider, and continue to actively maintain the gateway as it directly powers Glama. If you need a lightweight proxy that makes LLM providers OpenAI-compatible, Lightport is for you. If you need an enterprise gateway with all the bells and whistles, consider Portkey Gateway.

```shell
pnpx lightport
```

The gateway runs on http://localhost:8787.
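The "transforms them for the target provider" step can be illustrated with a toy sketch. This is not Lightport's actual code (that lives in src/providers/); it only shows the kind of reshaping involved, assuming Anthropic as the target, whose Messages API takes the system prompt as a top-level field and requires max_tokens:

```typescript
// Illustrative only: reshape an OpenAI-style chat request for Anthropic's
// Messages API. Field names on the Anthropic side follow its public API;
// the function itself is a hypothetical stand-in for Lightport's transform.

interface OpenAIChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  max_tokens?: number;
}

interface AnthropicMessagesRequest {
  model: string;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
  max_tokens: number;
}

function toAnthropic(req: OpenAIChatRequest): AnthropicMessagesRequest {
  // Anthropic takes the system prompt as a top-level field, not a message.
  const system = req.messages.find((m) => m.role === "system")?.content;
  const messages = req.messages
    .filter((m) => m.role !== "system")
    .map((m) => ({ role: m.role as "user" | "assistant", content: m.content }));
  return {
    model: req.model,
    ...(system !== undefined ? { system } : {}),
    messages,
    // Anthropic requires max_tokens; default it when the caller omits it.
    max_tokens: req.max_tokens ?? 1024,
  };
}

const out = toAnthropic({
  model: "claude-3-5-haiku-latest",
  messages: [
    { role: "system", content: "You are terse." },
    { role: "user", content: "Hello!" },
  ],
});
```

The reverse direction (provider response back into OpenAI's choices shape) is the other half of the same layer.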
```shell
pnpm install
pnpm dev

curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-lightport-provider: openai" \
  -H "Authorization: Bearer sk-YOUR-KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Set the provider via the x-lightport-provider header and pass credentials via Authorization (or provider-specific headers like x-api-key for Anthropic). Some providers require additional headers:

| Provider | Headers |
|---|---|
| Azure OpenAI | x-lightport-azure-resource-name, x-lightport-azure-deployment-id, x-lightport-azure-api-version |
| Bedrock | x-lightport-aws-access-key-id, x-lightport-aws-secret-access-key, x-lightport-aws-region |
| Vertex AI | x-lightport-vertex-project-id, x-lightport-vertex-region |
| Custom host | x-lightport-custom-host |

Route provider requests through an HTTP proxy by setting the x-lightport-proxy-url header:

```shell
curl http://localhost:8787/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-lightport-provider: openai" \
  -H "x-lightport-proxy-url: http://user:[email protected]:8080" \
  -H "Authorization: Bearer sk-YOUR-KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

```shell
pnpm dev        # Development server with hot reload
pnpm build      # Production build
pnpm start:node # Start production server
pnpm test       # Run tests
pnpm lint       # Lint code
pnpm format     # Format and auto-fix
pnpm knip       # Find unused code/dependencies
```

Copy .env.example to .env and fill in API keys for the providers you want to test. Tests automatically load .env and skip providers without credentials.
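The per-provider header requirements in the table above amount to a small validation step before dispatch. A minimal sketch, assuming a lookup keyed by provider – the header names come from the README table, but the provider slugs and the function itself are hypothetical, not Lightport's API:

```typescript
// Hypothetical header validation keyed on x-lightport-provider.
// Only the header names are taken from the documented table; the
// provider slugs ("azure-openai", etc.) are assumed spellings.

const REQUIRED_HEADERS: Record<string, string[]> = {
  "azure-openai": [
    "x-lightport-azure-resource-name",
    "x-lightport-azure-deployment-id",
    "x-lightport-azure-api-version",
  ],
  bedrock: [
    "x-lightport-aws-access-key-id",
    "x-lightport-aws-secret-access-key",
    "x-lightport-aws-region",
  ],
  "vertex-ai": ["x-lightport-vertex-project-id", "x-lightport-vertex-region"],
};

// Returns the list of missing headers (empty when the request is routable).
function missingHeaders(headers: Record<string, string>): string[] {
  const provider = headers["x-lightport-provider"];
  if (!provider) return ["x-lightport-provider"];
  // Providers absent from the table only need standard auth headers.
  const required = REQUIRED_HEADERS[provider] ?? [];
  return required.filter((h) => !(h in headers));
}

const missing = missingHeaders({
  "x-lightport-provider": "bedrock",
  "x-lightport-aws-region": "us-east-1",
});
```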
```shell
cp .env.example .env  # fill in your keys
pnpm test
```

Request flow:

Request
-> bodyParser middleware (parse JSON/FormData)
-> requestValidator (require provider header)
-> handler (chatCompletions / completions / modelResponses)
-> constructConfigFromRequestHeaders()
-> tryPost()
-> adapter transform (if needed for responses/messages API)
-> provider lookup + transformToProviderRequest()
-> fetch to provider
-> responseHandler() (transform response back)
-> Response

The provider system (src/providers/) contains 77 provider implementations. Each defines:

- API config (base URL, endpoints, headers)
- Request parameter transforms
- Response transforms (streaming + non-streaming)
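The three parts each provider defines could be sketched as one object per provider. This is a hypothetical shape, not the actual src/providers/ interface, and the provider, endpoint, and field names in it are invented for illustration:

```typescript
// Hypothetical provider entry covering the three documented parts:
// API config, request parameter transforms, and response transforms.

interface ProviderConfig {
  baseURL: string;
  chatCompletionsPath: string;
  headers: (apiKey: string) => Record<string, string>;
  transformRequest: (body: Record<string, unknown>) => Record<string, unknown>;
  transformResponse: (body: Record<string, unknown>) => Record<string, unknown>;
}

const exampleProvider: ProviderConfig = {
  baseURL: "https://api.example-llm.com", // invented provider
  chatCompletionsPath: "/v1/generate",
  headers: (apiKey) => ({
    "x-api-key": apiKey,
    "content-type": "application/json",
  }),
  // Rename an OpenAI-style parameter to the provider's own name.
  transformRequest: ({ max_tokens, ...rest }) => ({
    ...rest,
    ...(max_tokens !== undefined ? { max_output_tokens: max_tokens } : {}),
  }),
  // Reshape the provider's reply back into an OpenAI-style choice list.
  transformResponse: (body) => ({
    choices: [{ message: { role: "assistant", content: body.output } }],
  }),
};

const req = exampleProvider.transformRequest({ model: "m", max_tokens: 16 });
const res = exampleProvider.transformResponse({ output: "hi" });
```

In this shape, adding a provider means writing one such entry plus its streaming variant, which is consistent with the gateway carrying 77 of them.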

This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.
