Building Knowledge Agents Without Embeddings
hackernews
💼 Business
#ai sdk
#tip
#vercel
#agent
#embedding
#knowledge agent
Summary
To overcome the limitations of conventional vector-database-based retrieval, Vercel has open-sourced a 'Knowledge Agent Template' that relies on a filesystem and bash tools. Instead of a complex embedding pipeline, the agent searches data with grep and find commands inside a Vercel Sandbox, which cut costs by 75% and improved accuracy. In addition, the Chat SDK lets a single codebase power a scalable agent that runs simultaneously on Slack, Discord, GitHub, and other platforms.
Why It Matters
Developer Perspective
Under review
Researcher Perspective
Under review
Business Perspective
Under review
Body
Deploy an agent with Vercel Sandbox, Chat SDK, and AI SDK

Most knowledge agents start the same way. You pick a vector database, then build a chunking pipeline. You choose an embedding model, then tune retrieval parameters. Weeks later, your agent answers a question incorrectly, and you have no idea which chunk it retrieved or why that chunk scored highest.

We kept seeing this pattern internally and for teams building agents on Vercel. The embedding stack works for semantic similarity, but it falls short when you need a specific value from structured data. The failure mode is silent: the agent confidently returns the wrong chunk, and you can't trace the path from question to answer.

That's why we tried something different. We replaced our vector pipeline with a filesystem and gave the agent bash. Our sales call summarization agent went from ~$1.00 to ~$0.25 per call, and the output quality improved. The agent was doing what it already knew how to do: read files, run grep, and navigate directories. So we open-sourced the Knowledge Agent Template, a production-ready version of this architecture built on Vercel.

What the template does

The Knowledge Agent Template is an open source, file-system-based agent you can fork, customize, and deploy. Plug in any source: GitHub repos, YouTube transcripts, documents (e.g., markdown files), or custom APIs. Ship it as a web chat app, a GitHub bot, a Discord bot, or all three at once. The template is built on Vercel Sandbox, AI SDK, and Chat SDK. Deploy to Vercel in a single click, configure your sources, and start answering questions.

File-based search with Vercel Sandbox

No vector database. No chunking pipeline. No embedding model. Your agent uses grep, find, and cat inside of isolated Vercel Sandboxes.
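As a concrete illustration, the agent's searches are just ordinary shell commands. The docs/ layout below is hypothetical (the template's real snapshot repository will differ); it only shows the kind of locate-then-read loop the agent runs in the sandbox:

```shell
# Build a tiny stand-in for a synced docs snapshot (hypothetical layout).
mkdir -p docs/plans
echo "Enterprise pricing: contact sales" > docs/plans/enterprise.md
echo "Hobby plan is free" > docs/plans/hobby.md

# Step 1: locate files that mention the topic in question.
grep -rl "pricing" docs/

# Step 2: read the matching file directly.
cat docs/plans/enterprise.md
```

Every step leaves a visible trace, which is what makes the wrong-answer debugging loop described below possible.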
Here's how it works:

1. You add sources through the admin interface, and they're stored in Postgres
2. Content syncs to a snapshot repository via Vercel Workflow
3. When the agent needs to search, a Vercel Sandbox loads the snapshot
4. The agent's bash and bash_batch tools execute file-system commands
5. The agent returns an answer with optional references

Results are deterministic, explainable, and fast. When the agent gives a wrong answer, you open the trace and see: it ran grep -r "pricing" docs/, read docs/plans/enterprise.md, and pulled the wrong section. You fix the file or adjust the agent's search strategy. The whole debugging loop takes minutes.

Compare that to vectors. If the agent returns a bad chunk, you have to determine which chunk it retrieved, then figure out why it scored 0.82 while the correct one scored 0.79. The problem could be the chunking boundary, the embedding model, or the similarity threshold. With filesystem search, there is no guessing why the agent picked a chunk and no tuning retrieval scores in the dark. You're debugging a question, not a pipeline.

LLMs already understand filesystems. They've been trained on massive amounts of code: navigating directories, grepping through files, managing state across complex codebases. If agents excel at filesystem operations for code, they excel at them for anything. That's the insight behind the filesystem-and-bash approach. You're not teaching the model a new skill; you're using the one it's best at. There is no embedding pipeline to maintain and no vector DB to scale. Add a source, sync, and search.

Chat SDK: one agent, every platform

Your agent has one knowledge base, one codebase, and one source of truth. Yet your engineers are scattered across Slack, your community is spread across Discord, and your bug reports are buried in GitHub. You need a single agent that understands all three. Chat SDK connects your knowledge agent to every platform your users are on.
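The bash tool at the heart of this loop can be sketched in a few lines. This is an illustrative assumption, not the template's actual implementation: runBashTool, its truncation limit, and the use of local node:child_process are all stand-ins for command execution inside a Vercel Sandbox.

```typescript
// Minimal sketch of a "bash" tool body: run a shell command and return
// its stdout to the model. The real template executes inside an isolated
// Vercel Sandbox; node:child_process is used here only for illustration.
import { execFileSync } from "node:child_process";

// Hypothetical executor: the agent supplies a command string; the output
// is truncated so a large file can't blow up the model's context window.
function runBashTool(command: string, maxBytes = 4096): string {
  const out = execFileSync("bash", ["-c", command], { encoding: "utf8" });
  return out.length > maxBytes ? out.slice(0, maxBytes) + "\n[truncated]" : out;
}

// The kind of call the agent makes while answering a question.
console.log(runBashTool("echo 'pricing: see enterprise plan'"));
```

Because the tool's input and output are plain text, every command the agent ran shows up verbatim in the trace, which is exactly what makes the debugging loop above inspectable.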
Import the adapters you need, point each one at the same agent pipeline, and your agent is live in any Chat SDK-supported platform.

[Figure: Chat SDK Knowledge Agent example]

Each adapter handles platform-specific concerns (e.g., authentication, event formats, messaging) while the agent itself stays unchanged. onNewMention fires whenever the bot is mentioned, regardless of platform. The agent receives the message text, streams a response through the same filesystem-backed pipeline, and posts back to the thread.

```typescript
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createDiscordAdapter } from "@chat-adapter/discord";
import { createRedisState } from "@chat-adapter/state-redis";

const bot = new Chat({
  userName: "knowledge-agent",
  adapters: {
    slack: createSlackAdapter(),
    discord: createDiscordAdapter(),
  },
  state: createRedisState(),
});

bot.onNewMention(async (thread, message) => {
  await thread.subscribe();
  const result = await agent.stream({ prompt: message.text });
  await thread.post(result);
});
```

The template ships with GitHub and Discord adapters out of the box, and Chat SDK already supports Slack, Microsoft Teams, Google Chat, and more. See the adapter directory for a full list of official and community adapters.