Context Use – Turn Your Data Exports into Portable AI Memory
📦 Open source · #ai deal · #ai memory · #chatgpt · #claude · #gpt-4 · #openai · #python · #data export · #context use
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
The open-source tool Context Use automatically analyzes data exports from services such as ChatGPT and Instagram and builds them into a personalized, portable AI memory. Running as a proxy, it saves every conversation to memory in the background and automatically injects that context into future requests, producing more personal answers. It also supports semantic search on top of SQLite and can be easily integrated into an existing ASGI server, making it possible to efficiently manage data from a variety of platforms.
Full text
Portable AI memory from your conversations and data exports. Make your answers to the human feel truly personal — use the skill at SKILL.md.

```shell
pip install context-use  # or: uv tool install context-use
```

Start the proxy and point any OpenAI-compatible client at it. Every conversation is automatically turned into memories.

```shell
context-use proxy --upstream-url https://api.openai.com
```

With `--upstream-url`, the proxy always forwards requests to that upstream URL, so your client only needs to talk to the local proxy:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="",
)
client.chat.completions.create(model="gpt-4o", messages=[...])
```

If you omit `--upstream-url`, the proxy uses the request Host header instead.

> Note: only `POST /v1/chat/completions` and `POST /v1/responses` requests are enriched with memories. All other paths are forwarded transparently without modification.

Memories are generated in the background from each conversation and are used to automatically enrich future requests that flow through the proxy.

If you already have your own ASGI server (FastAPI, Starlette, etc.), you can simply mount `create_proxy_app`:

```python
from context_use import ContextUse, ContextProxy, create_proxy_app

ctx = ContextUse(storage=..., store=..., llm_client=...)
await ctx.init()

handler = ContextProxy(ctx)
asgi_app = create_proxy_app(handler)
```

Bulk-import memories from your data exports. Use this to bootstrap your memory store with historical data.

```shell
context-use pipeline --quick
```

The quickstart mode uses the LLM provider's real-time API — fast for small slices but susceptible to rate limits on large exports. Use the full pipeline to process a complete data export without running into rate limits, and for cost-efficient batch processing:

```shell
context-use pipeline
```

This ingests the export and generates memories via the LLM provider's batch API — significantly cheaper and more rate-limit-friendly than the real-time API used by quickstart.
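Conceptually, the enrichment step described above can be pictured as prepending retrieved memories to the outgoing chat request before it is forwarded upstream. The sketch below is purely illustrative: the function name and the system-message format are assumptions for demonstration, not context-use's actual implementation.

```python
def enrich_request(body: dict, memories: list[str]) -> dict:
    """Prepend retrieved memories as a system message (illustrative sketch)."""
    if not memories:
        return body  # nothing to add; forward the request unchanged
    memory_block = "Relevant memories about the user:\n" + "\n".join(
        f"- {m}" for m in memories
    )
    enriched = dict(body)
    enriched["messages"] = [
        {"role": "system", "content": memory_block}
    ] + list(body.get("messages", []))
    return enriched


request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Plan a weekend trip"}],
}
enriched = enrich_request(request, ["Enjoys hiking", "Prefers train travel"])
print(enriched["messages"][0]["role"])  # → system
```

The real proxy applies this kind of transformation only to the two enriched endpoints and leaves all other paths untouched.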
Typical runtime: 2–10 minutes.

Memories are stored in SQLite and persist across sessions, enabling semantic search and the Personal agent.

```shell
context-use memories list
context-use memories search "hiking trips in 2024"
context-use memories export
```

To ingest several archives without running the full pipeline each time, use `ingest` to parse them individually, then generate memories in one go:

```shell
context-use ingest chatgpt-export.zip
context-use ingest instagram-export.zip
context-use memories generate
```

A multi-turn agent that operates over your full memory store:

```shell
context-use agent synthesise   # generate higher-level pattern memories
context-use agent profile      # compile a first-person profile
context-use agent ask "What topics do I keep coming back to across all my conversations?"
```

```shell
context-use config --help
```

The configuration is saved in a config file at `/.config/context-use/config.toml`.

- Follow the export guide for your provider in the supported providers table. The export is delivered as a ZIP file — do not extract it.
- Move or copy the ZIP into `context-use-data/input/`:

```
context-use-data/
└── input/
    └── your-data-export.zip   ← place it here
```

| Provider | Status | Data types | Export guide |
|---|---|---|---|
| ChatGPT | Available | Conversations | Export your data |
| Claude | Available | Conversations | Export your data |
| Instagram | Available | Media, Likes, DMs, Ads, Comments, Saved, Profile Searches, ... | Export your data |
| Google | Available | Searches, YouTube, Shopping, Lens, Discover | Export your data |
| Netflix | Available | Viewing Activity, Search History, Ratings, My List, Messages, Preferences | Export your data |
| Airbnb | Available | Wishlists, Search History, Reviews, Reservations, Messages | Export your data |

Want another provider? Contribute it by pointing your coding agent to the Adding a Data Provider guide.
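The SQLite-backed semantic search mentioned above can be pictured as embedding each memory and ranking rows by similarity to the query. The sketch below is purely illustrative: the table schema and the toy character-frequency "embedding" are assumptions for demonstration, not context-use's actual storage format or embedding model.

```python
import json
import math
import sqlite3

# Illustrative memory store: one row per memory, embedding stored as JSON.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memories (id INTEGER PRIMARY KEY, text TEXT, embedding TEXT)")


def embed(text: str) -> list[float]:
    """Toy embedding: a 26-dim letter-frequency vector (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


for text in [
    "Hiking trip in the Alps, summer 2024",
    "Favourite editor is Vim",
    "Watched three seasons of a cooking show",
]:
    db.execute(
        "INSERT INTO memories (text, embedding) VALUES (?, ?)",
        (text, json.dumps(embed(text))),
    )


def search(query: str, k: int = 1) -> list[str]:
    """Rank all stored memories by cosine similarity to the query embedding."""
    rows = db.execute("SELECT text, embedding FROM memories").fetchall()
    q = embed(query)
    scored = sorted(rows, key=lambda r: cosine(q, json.loads(r[1])), reverse=True)
    return [text for text, _ in scored[:k]]


print(search("hiking trips in 2024"))  # → ['Hiking trip in the Alps, summer 2024']
```

A production store would use a real embedding model and an indexed vector search rather than a full scan, but the retrieve-then-rank shape is the same.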
This analysis was written by the Genesis Park editorial team with the help of AI. The original can be found via the source link.