Context Engineering as a Competitive Advantage

Towards Data Science | 💼 Business
#ai #ai-models #llm #competitive-advantage #context-engineering #prompt-engineering
Source: Towards Data Science · Summarized and analyzed by Genesis Park

Summary

In the era of generative AI, simply using a model is no longer enough: the ability to effectively encode proprietary domain expertise into AI systems, known as "context engineering," has emerged as a core competitive capability. Companies that turn their unique expertise into a form AI can use, and that deliberately design the surrounding context, can achieve differentiated results that outperform their competitors.

Main Text

Today, I would like to zoom in on context engineering — the discipline of dynamically filling the context window of an AI model with information that maximizes its chances of success. Context engineering allows you to encode and pass on your existing expertise and domain knowledge to an AI system, and I believe it is an important component for strategic differentiation. If you have both unique domain expertise and know how to make it usable to your AI systems, you'll be hard to beat. In this article, I will summarize the components of context engineering as well as the best practices that have established themselves over the past year.

One of the most critical factors for success is a tight handshake between domain experts and engineers. Domain experts are needed to encode domain knowledge and workflows, while engineers are responsible for knowledge representation, orchestration, and dynamic context construction. In the following, I attempt to explain context engineering in a way that is helpful to both domain experts and engineers; thus, we will not dive into technical topics like context compacting and compression.

For now, let's assume our AI system has an abstract component — the context builder — which assembles the most effective context for every user interaction. The context builder sits between the user request and the language model executing the request. You can think of it as an intelligent function that takes the current user query, retrieves the most relevant information from external resources, and assembles the optimal context for it. After the model produces an output, the context builder may also store new information, like user edits and feedback. In this way, the system accumulates continuity and experience over time.

Conceptually, the context builder must manage three distinct resources:

- Knowledge about the domain and specific tasks turns a generic AI system into a domain expert.
- Tools allow the agent to act in the real world.
- Memory allows the agent to personalize its actions and learn from user feedback.

As the system matures, you will also find more and more interesting interdependencies between these three components, which can be addressed with proper orchestration. Let's dive in and examine these components one by one. We will illustrate them using the example of an AI system that supports RevOps tasks such as weekly forecasts.

Knowledge

As you begin designing your system, you speak with the Head of RevOps to understand how forecasting is currently done. She explains: "When I prepare a forecast, I don't just look at the pipeline. I also need to understand how similar deals performed in the past, which segments are trending up or down, whether discounting is increasing, and where we historically overestimated conversion. Sometimes, that information is already top-of-mind, but often, I need to search through our systems and talk to salespeople. In any case, the CRM snapshot alone is only a baseline."

LLMs come with extensive general knowledge from pre-training. They understand what a sales pipeline is and know common forecasting methods. However, they are not aware of your company's specifics, such as:

- Historical close rates by stage and segment
- Average time-in-stage benchmarks
- Seasonality patterns from comparable quarters
- Pricing and discount policies
- Current revenue targets
- Definitions of pipeline stages and probability logic

Without this information, users will have to manually adjust the system's outputs. They will explain that enterprise deals slip more often in Q4, correct expansion assumptions, and remind the model that discount approvals are currently delayed. Soon, they might conclude that the AI system is interesting in itself, but not viable for their day-to-day.

Let's look at patterns that allow you to integrate an AI model with company-specific knowledge.
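Before turning to those patterns, the abstract context builder described earlier can be sketched as a small class that wires the three resources (knowledge, tools, memory) into a single prompt. This is a minimal, hypothetical illustration — all names, facts, and the keyword-matching "retrieval" are made up for the sketch; a real system would use semantic search:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBuilder:
    """Hypothetical sketch of a context builder managing three resources."""
    knowledge_base: dict               # company-specific facts, keyed by topic
    tools: dict                        # callables the agent may invoke
    memory: list = field(default_factory=list)  # accumulated user feedback

    def build_context(self, query: str) -> str:
        # 1. Pull knowledge entries whose topic key appears in the query
        #    (a toy stand-in for real similarity-based retrieval).
        facts = [v for k, v in self.knowledge_base.items() if k in query.lower()]
        # 2. List available tools so the model knows what it can call.
        tool_names = ", ".join(self.tools)
        # 3. Append recent feedback so the system learns over time.
        notes = "; ".join(self.memory[-3:])
        return (f"Facts: {facts}\nTools: {tool_names}\n"
                f"User feedback: {notes}\nTask: {query}")

    def record_feedback(self, note: str) -> None:
        self.memory.append(note)

builder = ContextBuilder(
    knowledge_base={
        "close rates": "Enterprise stage-3 close rate: 31%",
        "seasonality": "Q4 enterprise deals slip roughly twice as often",
    },
    tools={"crm_snapshot": lambda: "..."},
)
builder.record_feedback("Expansion assumptions were too optimistic last week")
print(builder.build_context("Weekly forecast using close rates and seasonality"))
```

The point of the sketch is the shape of the loop: retrieve, assemble, respond, then write feedback back into memory for the next interaction.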
We will start with RAG (Retrieval-Augmented Generation) as the baseline and progress towards more structured representations of knowledge.

RAG

In Retrieval-Augmented Generation (RAG), company- and domain-specific knowledge is broken into manageable chunks (refer to this article for an overview of chunking methods). Each chunk is converted into a text embedding and stored in a database. Text embeddings represent the meaning of a text as a numerical vector. Semantically similar texts are neighbours in the embedding space, so the system can retrieve "relevant" information through similarity search. Now, when a forecasting request arrives, the system retrieves the most similar text chunks and includes them in the prompt.

Conceptually, this is elegant, and every self-respecting B2B AI team has a RAG initiative underway. However, most prototypes and MVPs struggle with adoption. The naive version of RAG makes several oversimplifying assumptions about the nature of enterprise knowledge: it uses isolated text fragments as a source of truth, and it assumes that documents are internally consistent.
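The chunk-embed-retrieve loop behind RAG can be shown end to end in a few lines. The sketch below is a toy, assuming a bag-of-words "embedding" and in-memory index purely for illustration; production systems use learned embedding models and a vector database, and all chunk texts here are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words counts (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Company knowledge split into chunks and "embedded" ahead of time.
chunks = [
    "Historical close rates by stage and segment for enterprise deals",
    "Seasonality patterns show enterprise deals slip more often in Q4",
    "Office relocation plans for the Berlin team",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, k: int = 2) -> list:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

query = "forecast enterprise close rates for Q4"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

Note how the irrelevant relocation chunk is filtered out by similarity alone — and also how nothing in this loop checks whether the retrieved fragments are consistent with each other, which is exactly the weakness discussed above.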

This analysis was prepared by the Genesis Park editorial team with AI assistance. The original article is available via the source link.
