Show HN: I built GAI to use LLM agents in Go without heavy frameworks
hackernews
🏗️ Framework
#gemini
#mistral
#opensource
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
GAI is a flexible Go library for building agent applications on top of large language models (LLMs). It provides a generic interface over providers and models, along with conversation management and a loop for tool execution. It requires Go 1.26.1 or newer and can dynamically manage multiple model providers, including Gemini and Mistral.
Full text
GAI is a flexible Go library for building agent-style applications on top of LLMs. It provides a generic interface for providers and models, prompt and context helpers, and a loop for agentic tool-calling workflows.

The library is organized around three ideas:

- 🧩 `ai` defines the core provider, model, request, and response abstractions.
- 🗂️ `context` stores conversations, renders message history, and loads prompt files.
- 🔁 `loop` runs iterative model and tool execution when a model returns a tool call.

Requirements:

- Go 1.26.1 or newer
- API credentials for whichever provider you use

```
go get github.com/lace-ai/gai
```

To start, first create a provider. For example, for Gemini:

```go
geminiProvider := gemini.New("your_api_key")
```

🗂️ Use the Model Repository to manage multiple providers and dynamic model selection

You can use a `ModelRepository` to register multiple providers and look up models by name across providers:

```go
modelRepo := ai.NewModelRepository()
err := modelRepo.RegisterProvider(geminiProvider)
if err != nil {
    // handle error
}
```

To get a model from the repository, use the provider name and the model name:

```go
model, err := modelRepo.GetModel("gemini", "gemini-3-flash-preview")
if err != nil {
    // handle error
}
```

You can also access models from a provider directly and generate text:

```go
model, err := geminiProvider.Model("gemini-3-flash-preview")
if err != nil {
    // handle error
}

response, err := model.Generate(context.Background(), ai.AIRequest{
    Prompt: ai.Prompt{
        System: "You are a helpful assistant.",
        Prompt: "What is the capital of France?",
    },
    MaxTokens: 100,
})
```

🔌 Implement Your Own Provider

Currently, the library includes Gemini and Mistral implementations. Gemini uses the official go-genai library, and Mistral uses direct HTTP calls to the Mistral API. You can implement your own provider by implementing the `Provider` and `Model` interfaces defined in the `ai` package.

Provider implementation:

```go
type MyProvider struct {
    // any configuration fields you need, e.g. API key
}

func (p *MyProvider) Name() string { return "myprovider" }

func (p *MyProvider) Model(name string) (ai.Model, error) {
    // return a model implementation based on the name
}

func (p *MyProvider) ListModels() ([]string, error) {
    // return a list of available model names
}

func (p *MyProvider) Validate() error {
    // validate the provider configuration, e.g. check the API key is set
}
```

Model implementation:

```go
type MyModel struct {
    // any configuration fields you need, e.g. model name, provider reference
    name string
}

func (m *MyModel) Name() string { return m.name }

func (m *MyModel) Generate(ctx context.Context, req ai.AIRequest) (*ai.AIResponse, error) {
    // implement the logic to call your model API and return the response
}

func (m *MyModel) Close() error {
    // clean up any resources if needed
}
```

Now you can use your custom provider just like the built-in ones.

To build an agent with tools, use the `loop` package.

Tip: use an alias for the `context` package to avoid conflicts with the `context` package from the standard library.
For example:

```go
import aicontext "github.com/lace-ai/gai/context"

agentLoop := loop.New(
    model,               // the model you want to use
    []loop.Tool{myTool}, // any tools you want to provide; an echo tool is included for testing
    "What is the weather in New York?", // initial (user) prompt
    "You are a helpful assistant that can call tools to get information.", // system prompt
    nil, // optional context builder; if nil the loop renders prior messages itself
    nil, // optional tool response preprocessor; if nil the loop appends tool results as-is
)

err := agentLoop.Loop(context.Background())
if err != nil {
    // handle error
}

// get the final conversation messages, including tool calls and responses
messages := agentLoop.Messages()

var builder strings.Builder
aicontext.RenderMessages(messages, &builder)
fmt.Println(builder.String()) // render the messages for display
```

🧩 Implement Your Own Tool

To implement your own tool, create a struct that implements the `Tool` interface:

```go
type myToolArgs struct {
    Query string `json:"query"`
}

type MyTool struct {
    // any configuration fields you need
}

func (t *MyTool) Name() string { return "my_tool" }

func (t *MyTool) Description() string { return "A tool that does something useful." }

func (t *MyTool) Params() string {
    return `{"type":"object","required":["query"],"properties":{"query":{"type":"string","description":"Search query"}}}`
}

func (t *MyTool) Function(req *loop.ToolRequest) (*loop.ToolResponse, error) {
    var args myToolArgs
    if err := loop.DecodeToolArgs(req, &args); err != nil {
        return nil, err
    }
    // implement your tool logic here using args.Query
    return &loop.ToolResponse{Text: "result for: " + args.Query}, nil
}
```

Then include an instance of your tool in the `loop.New(...)` call.

To manage conversation history and build prompts from it, use the `context` package:

```go
// your implementation of SessionStore (e.g. in-memory, database, etc.)
store := mySessionStore

// the second argument is the session ID
sessionManager := aicontext.NewSessionManager(store, 1)

agentLoop := loop.New(
    model,               // the model you want to use
    []loop.Tool{myTool}, // any tools you want to provide
    "What is the weather in New York?", // initial (user) prompt
    "You are a helpful assistant that can call tools to get information.", // system prompt
    sessionManager, // session manager implements loop.ContextBuilder
    nil,            // optional tool response preprocessor
)

err := agentLoop.Loop(context.Background())
if err != nil {
    // handle error
}
```

🗄️ Implement Your Own Session Store

To implement your own session store, see the `SessionStore` interface and implement the required methods.

Package layout:

- `ai/` Core abstractions: Provider, Model, AIRequest, AIResponse, ModelRepository
- `ai_gemini/` Gemini provider and model implementation
- `ai_mistral/` Mistral provider and model implementation
- `context/` Context management: conversation/session types, prompt loading, message rendering
- `loop/` Agent loop, tool parsing, tool execution helpers
- `testutil/` Mocks used by tests

A provider is responsible for exposing available models and validating its own configuration. The shared interface is:

```go
type Provider interface {
    Name() string
    Model(name string) (Model, error)
    ListModels() ([]string, error)
    Validate() error
}
```

Use `ModelRepository` when you want to register multiple providers and look up models by name.

A model generates text from an `AIRequest` and returns an `AIResponse`:

```go
type Model interface {
    Name() string
    Generate(ctx context.Context, req AIRequest) (*AIResponse, error)
    Close() error
}
```

`ai.Prompt` combines three pieces of input:

- `System`: system instructions
- `Context`: prior conversation or external context
- `Prompt`: the current (user) request

`Prompt.CombinedPrompt()` concatenates those parts into one string in that order.

`AIRequest` currently contains:

- `Prompt`
- `MaxTokens`

Note: `MaxTokens` is ignored by some providers and might be removed in future versions.
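The `CombinedPrompt()` behavior described above can be sketched with a self-contained stand-in. The `Prompt` type here mirrors the three documented fields, but the separator is an assumption (a newline); the real library may join the parts differently.

```go
package main

import (
	"fmt"
	"strings"
)

// Prompt mirrors the three documented fields of ai.Prompt.
type Prompt struct {
	System  string // system instructions
	Context string // prior conversation or external context
	Prompt  string // the current (user) request
}

// CombinedPrompt joins the non-empty parts in System, Context, Prompt
// order. The separator used by the real library is not documented here;
// a single newline is assumed for this sketch.
func (p Prompt) CombinedPrompt() string {
	var parts []string
	for _, s := range []string{p.System, p.Context, p.Prompt} {
		if s != "" {
			parts = append(parts, s)
		}
	}
	return strings.Join(parts, "\n")
}

func main() {
	p := Prompt{System: "Be brief.", Prompt: "What is the capital of France?"}
	fmt.Println(p.CombinedPrompt())
}
```

Skipping empty parts keeps the rendered prompt free of blank lines when, for example, no prior conversation context exists yet.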
`AIResponse` returns:

- `Text`
- `InputTokens`
- `OutputTokens`

Package: `ai_gemini`
Constructor: `gemini.New(apiKey string) *gemini.Provider`
Known model names:

- gemini-3-flash-preview
- gemini-2.5-flash
- gemini-3.1-flash-lite-preview
- gemini-2.5-flash-lite

Package: `ai_mistral`
Constructor: `mistral.New(apiKey string) *mistral.Provider`
Known model names:

- mistral-small-latest
- mistral-medium-latest
- mistral-large-latest
- codestral-latest

This library does not read environment variables automatically. Create the provider with the API key you want to use, then register it in the repository.

The `context` package is not the standard library `context` package. Import it with an alias such as `aicontext` to avoid name collisions:

```go
import aicontext "github.com/lace-ai/gai/context"
```

Messages have one of four roles: system, user, assistant, tool. Each message wraps a `Content` implementation such as text, tool calls, or tool results (you can also implement your own). The renderer formats history as tagged blocks, which is what the loop uses when it builds context automatically.

`Conversation` is a minimal interface used by the `SessionManager` to load and render message history:

```go
type Conversation interface {
    Messages() []Message
}
```

`SessionStore` is an interface, not a built-in database implementation. You provide your own store that can:

- create sessions
- fetch sessions and messages
- add one or many messages

`SessionManager` builds prompt context from stored history. It loads the last 5 messages for the configured session, renders them, and appends the current loop messages.

Note: `NewSessionManager(store, id)` expects an integer session ID. If you want to start a new session, create one first.

`LoadPromptFromFile` reads `.md` and `.txt` files, trims whitespace, and returns the prompt text.

The `loop` package is for agent-style execution where the model can request tool calls. `loop.New(...)` creates a loop with:

- a model
- optional tools
- an initial user prompt
- an optional system prompt
- an optional context builder
- an optional tool-response preprocessor

If no context builder is provided, the loop renders prior messages itself. The loop stops when the model returns a normal response or when the maximum iteration count is reached.

Tools must implement:

```go
type Tool interface {
    Name() string
    Description() string
    Params() string
    Function(req *ToolRequest) (*ToolResponse, error)
}
```

Tool calls are expected to arrive as JSON with this shape:

```json
{
  "id": "tool_name",
  "type": "function",
  "arguments": { "some": "value" }
}
```

Tip: keep a tool's `Params()` aligned with the JSON fields its `Function(...)` decodes through `DecodeToolArgs`.

Helper functions:

- `DetectToolCall` checks whether a model response looks like a tool call.
- `CallTool` runs a tool by name.
- `DecodeToolArgs` unmarshals tool arguments into a typed struct.
- `RenderToolSignatures` formats tool metadata for prompting.

Common exported errors include:

- `ai.ErrProviderNotFound`
- `ai.ErrProviderAlreadyExists`
- `ai.ErrNilModelRepository`
- `loop.ErrModelNotConfigured`
- `loop.ErrToolNotFound`
- `loop.ErrMaxIterations`
- `context.ErrPromptMissing`
- `context.ErrSessionNotFound`
- `gemini.ErrInvalidAPIKey`
- `mistral.ErrInvalidAPIKey`

Handle provider and tool errors at the call site, especially when a model or session store is user-configured. To see all the errors, check the `errors.go` file in each package.

Run all tests:

```
go test ./...
```

Run a package test suite:

```
go test ./ai/...
go test ./loop/...
go test ./context/...
```

Notes:

- The `context` package name intentionally mirrors the domain it manages, but it is easy to confuse with `context.Context` from the standard library. Use an alias in imports. The `context` package is likely to be renamed before the official 1.0 release.
- `SessionManager` currently uses a fixed history window of 5 messages.

Contributions are welcome! Please open an issue or submit a pull request.
If you add a new provider or tool, document the new constructor, model names, and any required environment variables.

This library is licensed under the GNU Lesser General Public License v2.1. See LICENSE for details.

Copyright (c) 2026 lace-ai. All rights reserved.
This analysis was written by the Genesis Park editorial team with the help of AI. The original can be found via the source link.