Show HN: Open Agent Spec. Treat AI agents like typed functions, not prompt chains.
hackernews
📦 Open Source
#ai-models
#ai-agents
#gpt-4
#openai
#yaml
#schema-validation
#open-agent-spec
#prompt-engineering
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Open Agent Spec is a tool that defines AI agents as YAML-based specifications, so they can be managed as infrastructure, much like OpenAPI or Terraform. It validates inputs and outputs against schemas, separates prompts from model configuration, and lets agents run directly without any code generation. Through the `oa` CLI, users can validate specs or generate a Python scaffold to streamline and modularize an agent's implementation.
Body
Define AI agents as contracts, not scattered prompts. Open Agent Spec lets you define an agent once in YAML, validate inputs and outputs against a schema, and either run it directly with `oa run` or generate a Python scaffold with `oa init`.

Most agent systems are hard to reason about:

- outputs are not strictly typed
- behaviour is buried in prompts
- logic is split across Python, Markdown, and framework abstractions
- swapping models often breaks things in subtle ways

Open Agent Spec treats an agent like infrastructure. Think OpenAPI or Terraform, but for AI agents. You define:

- input schema
- output schema
- prompts
- model configuration

Then OA enforces the boundary: input -> LLM -> validated output. If the output does not match the schema, the task fails fast with a validation error. For example, this shape mismatch can silently break downstream systems:

```json
{"msg": "hello"}
```

instead of:

```json
{"response": "hello"}
```

Install (Python 3.10+):

```shell
pipx install open-agent-spec
oa init aac
oa validate aac
export OPENAI_API_KEY=your_key_here
oa run --spec .agents/example.yaml --task greet --input '{"name":"Alice"}' --quiet
```

With OA you can:

- define tasks, prompts, model config, and expected I/O in YAML
- run a spec directly without generating code first
- keep `.agents/*.yaml` in your repo and call them from CI
- generate a Python project scaffold when you want to customize the implementation

Shortest path from install to a working agent:

1. Create the agents-as-code layout (aac = repo-native `.agents/` directory):

   ```shell
   oa init aac
   ```

   This creates:

   ```
   .agents/
   ├── example.yaml  # minimal hello-world spec
   ├── review.yaml   # code-review agent that accepts a diff file
   ├── change.diff   # sample diff for immediate review-agent testing
   └── README.md     # quick usage notes
   ```

2. Validate the generated specs:

   ```shell
   oa validate aac
   ```

3. Set an API key for the engine in your spec (OpenAI by default):

   ```shell
   export OPENAI_API_KEY=your_key_here
   ```

4. Run the example agent:

   ```shell
   oa run --spec .agents/example.yaml --task greet --input '{"name":"Alice"}' --quiet
   ```

   `--quiet` prints the task output JSON only, good for piping to jq or scripting:

   ```json
   { "response": "Hello Alice!" }
   ```

   Omit `--quiet` for the full execution envelope with Rich formatting.

5. Run the review agent with the bundled sample diff:

   ```shell
   oa run --spec .agents/review.yaml --task review --input .agents/change.diff --quiet
   ```

   Or review your own change:

   ```shell
   git diff > change.diff
   oa run --spec .agents/review.yaml --task review --input change.diff --quiet
   ```

Start from this shape:

```yaml
open_agent_spec: "1.3.0"

agent:
  name: hello-world-agent
  role: chat

intelligence:
  type: llm
  engine: openai
  model: gpt-4o

tasks:
  greet:
    description: Say hello to someone
    input:
      type: object
      properties:
        name:
          type: string
      required: [name]
    output:
      type: object
      properties:
        response:
          type: string
      required: [response]

prompts:
  system: >
    You greet people by name.
  user: "{{ name }}"
```

Validate first, then run:

```shell
oa validate --spec agent.yaml
oa run --spec agent.yaml --task greet --input '{"name":"Alice"}' --quiet
```

If you want editable generated code instead of running the YAML directly:

```shell
oa init --spec agent.yaml --output ./agent
```

Generated structure:

```
agent/
├── agent.py
├── models.py
├── prompts/
├── requirements.txt
├── .env.example
└── README.md
```

Most agent projects end up hand-rolling the same pieces:

- prompt templates
- model configuration
- task definitions
- routing glue
- runtime wrappers

OA moves those concerns into a declarative spec so they can be reviewed, versioned, and reused.
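The fail-fast boundary described above can be sketched in a few lines. This is a hand-rolled illustration, not OA's actual validator: a hypothetical `validate()` checks a model's output dict against the task's JSON-Schema-style `output` block and raises on any shape mismatch.

```python
# Minimal sketch (not OA's real code) of the contract boundary:
# input -> LLM -> validated output, failing fast on schema mismatch.

# The "output" block of the greet task from the YAML spec above.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {"response": {"type": "string"}},
    "required": ["response"],
}

TYPE_MAP = {"object": dict, "string": str}

def validate(data, schema):
    """Recursively check data against a tiny JSON-Schema-like subset."""
    if not isinstance(data, TYPE_MAP[schema["type"]]):
        raise ValueError(f"expected {schema['type']}, got {type(data).__name__}")
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"missing required key: {key!r}")
    for key, sub in schema.get("properties", {}).items():
        if key in data:
            validate(data[key], sub)
    return data

validate({"response": "hello"}, OUTPUT_SCHEMA)  # passes
try:
    validate({"msg": "hello"}, OUTPUT_SCHEMA)   # the shape mismatch above
except ValueError as e:
    print(e)  # missing required key: 'response'
```

The point of putting this check between the LLM and the caller is that a wrong shape surfaces as an immediate validation error at the task boundary, rather than a silent breakage downstream.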
The intended model is:

- the spec defines the agent contract
- `oa run` executes the spec directly
- `oa init` generates a starting implementation when you need code
- external systems can orchestrate multiple specs however they want

OA deliberately does not prescribe:

- orchestration
- evaluation
- governance
- long-running runtime architecture

| Command | Purpose |
|---|---|
| `oa init aac` | Create `.agents/` with starter specs |
| `oa validate aac` | Validate all specs in `.agents/` |
| `oa validate --spec agent.yaml` | Validate one spec |
| `oa run --spec agent.yaml --task greet --input '{"name":"Alice"}' --quiet` | Run one task directly from YAML |
| `oa init --spec agent.yaml --output ./agent` | Generate a Python scaffold |
| `oa update --spec agent.yaml --output ./agent` | Regenerate an existing scaffold |

| Resource | Contents |
|---|---|
| openagentspec.dev | Project website |
| docs/REFERENCE.md | Spec structure, engines, templates, `.agents/` usage |
| Repository | Source, issues, workflows |

- The CLI command is `oa` (not `oas`).
- Python 3.10+ is required.
- `oa run` requires the relevant provider API key for the engine in your spec.
- Open Agent Spec was dreamed up by Andrew Whitehouse in late 2024, with a desire to give structure and standardisation to early agent systems.
- In early 2025, Prime Vector was formed, taking over the public-facing project.

MIT licensed | see LICENSE.
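Because `--quiet` emits only the task's output JSON, run results are straightforward to consume from scripts or CI. A sketch of that pattern in Python: the real `oa` subprocess call is shown commented out (it needs the CLI installed plus an API key), with the documented output shape of the greet task substituted in its place.

```python
import json
# import subprocess  # uncomment to drive the real CLI

# Real invocation (requires oa and OPENAI_API_KEY), commented out here:
# raw = subprocess.run(
#     ["oa", "run", "--spec", ".agents/example.yaml",
#      "--task", "greet", "--input", '{"name": "Alice"}', "--quiet"],
#     capture_output=True, text=True, check=True,
# ).stdout

# Stand-in: the --quiet output shape documented for the greet task.
raw = '{ "response": "Hello Alice!" }'

result = json.loads(raw)
print(result["response"])  # Hello Alice!
```

This is the same scripting workflow as piping `--quiet` output to jq, just from Python instead of the shell.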
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.