Show HN: A2A Utils – A comprehensive set of utility functions for A2A servers
hackernews
📦 Open Source
#a2a
#ai models
#show hn
#servers
#utilities
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
The 'A2A Utils' package, built on nearly a year of production experience, has been released to help users get the most out of A2A servers. The package introduces three main classes, A2AAgents, A2ASession, and A2ATools, which let LLMs send messages without exposing Agent Card URLs. It also provides core features such as automatically minimising Artifacts that would overload the context window, falling back to polling when an agent does not support streaming, and saving Tasks and files locally. Available for both Python and JavaScript, the package makes it possible to build a sophisticated A2A client agent in just a few lines of code.
Full Text
Today I’m releasing the A2A Utils package, a comprehensive set of utility functions for using A2A servers (remote agents). It has been three weeks in the making, and is based on nearly a year’s experience using A2A servers in production. The A2A MCP Server and OpenClaw A2A Plugin are wrappers around it, and it will power other packages in the near future. I highly encourage you to read through the documentation for 30 minutes. I promise it will save you at least 10x that time!

The package introduces three new classes that make it trivial to use A2A servers: A2AAgents, A2ASession, and A2ATools. The core, high-level contributions of the package are:

- The concept of an agent ID that can be used by LLMs to send messages without exposing Agent Card URLs and headers to them.
- LLM-friendly methods (e.g. get_agents_for_llm) and LLM-friendly types (e.g. TaskForLLM) for LLMs to interact with A2A servers and their responses.
- Automatic minimisation of Artifacts that would overload the context window, and tools to interact with them (e.g. view_data_artifact).
- Client-side saving of Tasks and Artifacts: a JSONTaskStore that saves Tasks and Artifacts as JSON files, and the FileStore, which automatically saves Base64-encoded and downloadable-URL FileParts.
- send_message and get_task functions that automatically send non-blocking messages, attempt to stream the response, and fall back to polling if the agent doesn’t support streaming.

For a detailed explanation of the design decisions, considerations, and the package’s approach, see Design Decisions. The best way to understand the package is with code examples.
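The stream-then-poll fallback described above can be sketched generically. The StubAgent class and function signature below are illustrative stand-ins for this pattern, not the package's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class StubAgent:
    """Hypothetical stand-in for a remote A2A agent (not the real SDK)."""
    supports_streaming: bool
    chunks: list = field(default_factory=lambda: ["partial", "done"])

    def stream(self):
        # A real agent would raise or advertise capability in its Agent Card.
        if not self.supports_streaming:
            raise NotImplementedError("agent does not support streaming")
        yield from self.chunks

    def poll(self):
        # Polling returns only the final state of the task.
        return self.chunks[-1]

def send_message(agent):
    """Try streaming first; fall back to polling if the agent
    doesn't support it, mirroring the behaviour the post describes."""
    try:
        result = None
        for chunk in agent.stream():
            result = chunk  # consume the stream to its final chunk
        return result
    except NotImplementedError:
        return agent.poll()

print(send_message(StubAgent(True)))   # → done (via streaming)
print(send_message(StubAgent(False)))  # → done (via polling fallback)
```

Either path converges on the same final result; the fallback simply trades incremental updates for a single terminal response.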
Here’s one of the most common operations in A2A: sending a message and returning the response if it’s a Message, or polling the Task until it reaches a terminal state. In the A2A Utils example, A2AAgents automatically fetches the Agent Card; send_message sends the message as non-blocking, streams the response (falling back to polling if the agent doesn’t support streaming), and returns a Message or Task. If it times out, get_task can be used to continue streaming (or polling). Note that the equivalent A2A SDK example is simplified, because it doesn’t check whether the agent supports streaming, and it doesn’t save Tasks, Artifacts, and files locally.

The time savings are even more pronounced when building agents. A2ATools defines 6 LLM-friendly tools based on Writing Effective Tools for AI Agents. The tools return LLM-friendly types that are subsets of A2A types (e.g. TaskForLLM, MessageForLLM, and ArtifactForLLM). They also automatically minimise Artifacts that would overload the context window, and save Tasks, Artifacts, and files locally. You can create an A2A client agent that can send and receive text, data, and files to one or more A2A servers in only a few lines of code.

A2ATools uses A2ASession under the hood. The key differences are that A2ATools has agent-specific tools, LLM-friendly docstrings and types, return types that are JSON-serialisable, and error messages that are actionable. A2ATools is great for creating a sophisticated A2A client agent quickly, but if you’d prefer to write your own tools, add tools, etc., you can use A2AAgents and A2ASession as the foundation.

The package is available in Python and JavaScript. I would love to hear your thoughts and welcome feedback. Please leave a comment and connect with me.
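The automatic Artifact minimisation described above can be illustrated with a small sketch. The dict shapes, the placeholder wording, and the character threshold are all assumptions for illustration; the package's real types and minimisation logic may differ:

```python
def minimise_artifact(artifact, max_chars=200):
    """Hypothetical sketch: replace oversized data parts with a short
    placeholder so they don't overload an LLM's context window, and
    point the LLM at a tool it can use to inspect the full data."""
    parts = []
    for part in artifact["parts"]:
        text = part.get("text", "")
        if len(text) > max_chars:
            # Keep a compact summary instead of the raw payload.
            parts.append({"text": f"[data artifact: {len(text)} chars, "
                                  "use view_data_artifact to inspect]"})
        else:
            parts.append(part)
    return {**artifact, "parts": parts}

big = {"name": "report", "parts": [{"text": "x" * 10_000}]}
small = minimise_artifact(big)
print(small["parts"][0]["text"])
# → [data artifact: 10000 chars, use view_data_artifact to inspect]
```

The key design idea, as the post describes it, is that the full Artifact is still saved client-side; only the version handed to the LLM is truncated, with a pointer to a tool (view_data_artifact) for retrieving the data on demand.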
If you’d like to be part of an A2A community, join our Discord: https://discord.gg/674NGXpAjU
This analysis was written by the Genesis Park editorial team with the help of AI. The original article is available via the source link.