Local Cursor - a local AI agent that runs on your own computer using Ollama

hackernews · 📦 Open source
#ai deals #ai coding #llama #ollama #openai #privacy #local ai #terminal agent
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

Local Cursor is an open-source AI coding assistant that runs entirely locally inside your terminal via Ollama, keeping your code secure by never sending it to external servers. It handles natural-language requests to read and write files, run whitelisted shell commands, and search, and it ships with the `qwen3:32b` model by default. With Python 3.8 or later and Ollama installed, you can clone the repository, set up a virtual environment, and then swap in other Ollama models or extend the assistant with new tools as needed.

Body

Privacy-first AI coding assistant that lives entirely in your terminal. No clouds, no compromises. Local Cursor wraps a local large language model (LLM) served by Ollama in a simple CLI, giving you Copilot-style help without ever sending a byte of your code to external servers.

Features

| Capability | What it means |
|---|---|
| File tooling | Read, write, list & search files via natural-language requests |
| Shell commands | Safely run whitelisted commands (`ls`, `grep`, `find`, …) |
| Web search | Optional Exa API integration for up-to-date answers |
| Local LLM | Ships with `qwen3:32b` by default; swap in any Ollama model |
| Extensible | Add new tools, commands, or UI layers without touching model code |

Prerequisites

- Python ≥ 3.8
- Ollama running locally (`brew install ollama`, or see the docs)
- (Optional) Exa API key for web search

Quick start

```bash
# Clone and enter the repo
git clone https://github.com/towardsai/local-cursor.git
cd local-cursor

# Set up a virtualenv
python -m venv .venv && source .venv/bin/activate

# Install Python deps
pip install -r requirements.txt

# Pull a model & start Ollama
ollama pull qwen3:32b
ollama serve  # keep this terminal running

# (Optional) add your Exa key
echo "EXA_API_KEY=sk-..." > .env

# Fire up the assistant
python main.py --model qwen3:32b
```

Tip → run `python main.py --help` for all CLI flags (debug mode, model override, …).

How it works

- Natural-language input from the terminal is sent to the model, along with a system prompt that lists the available tools.
- The local LLM (via Ollama) analyzes the request and responds with either a plain-text answer or a structured tool call.
- If a tool call is issued, the OllamaAgent executes the corresponding function (e.g., read/write a file, run a shell command) and sends the result back to the model.
- This loop continues until the model produces a final answer, which is then printed in your terminal.

All logic is in `main.py`; the heavy lifting is done by the open-source model running locally.
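The request → tool call → result → answer cycle described above can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the project's actual code: the real logic lives in `main.py` and its OllamaAgent, while the names here (`agent_loop`, `model_step`, `TOOLS`) are made up for clarity.

```python
import json

def read_file(path):
    # Illustrative tool; the real project registers more tools in main.py.
    with open(path, encoding="utf-8") as f:
        return f.read()

TOOLS = {"read_file": read_file}

def agent_loop(model_step, user_input, max_turns=5):
    """Drive the request -> tool call -> result -> answer cycle.

    `model_step(messages)` stands in for a chat call to the local Ollama
    model; it must return either {"answer": "..."} for a final reply or
    {"tool": name, "args": {...}} for a structured tool call.
    """
    messages = [{"role": "user", "content": user_input}]
    for _ in range(max_turns):
        reply = model_step(messages)
        if "answer" in reply:            # final plain-text answer: stop looping
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])   # execute the tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("model never produced a final answer")
```

Capping the loop with `max_turns` is a common safeguard so a model that keeps issuing tool calls cannot spin forever.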
Available tools

| Tool | What it does |
|---|---|
| `list_files(path=".")` | Show directories and files with human-friendly icons |
| `read_file(path)` | Return the full text of a file |
| `write_file(path, content)` | Create or overwrite a file |
| `find_files(pattern)` | Glob search (e.g. `**/*.py`) |
| `run_command(cmd)` | Execute a whitelisted shell command |
| `web_search(query, num_results=5)` | Query the web via Exa |

Add your own by editing `get_tools_definition()`; the model will "see" them automatically at runtime.

Dependencies

Local Cursor keeps its runtime lean:

```text
requests        # to make Exa API calls
click           # to build the CLI
colorama        # to format CLI output
openai          # to create an OpenAI client
python-dotenv   # to load environment variables from our .env file
```

(See `requirements.txt` for exact versions.)

Security

- Only a safe subset of shell commands is allowed by default; edit `allowed_commands` in `run_command()` to adjust.
- All file paths are resolved inside the current working directory to avoid accidental system-wide access.

Troubleshooting

| Symptom | Fix |
|---|---|
| `Error: Exa API key not configured` | Add `EXA_API_KEY` to `.env` or disable web search |
| `command … not allowed for security reasons` | Add it to `allowed_commands` (understand the risks first!) |
| High memory usage | Try a smaller Ollama model like `phi3:4b` |

Contributing

- Fork & clone
- Create a virtualenv and install dev dependencies (`pip install -r requirements-dev.txt`)
- Create a PR
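The whitelist idea behind `run_command()` can be sketched as follows. This is a minimal illustration of the pattern, assuming the project's `allowed_commands` works roughly this way; the set contents and error wording here are placeholders, not the project's actual source.

```python
import shlex
import subprocess

# Hypothetical whitelist mirroring the project's allowed_commands.
ALLOWED_COMMANDS = {"ls", "grep", "find", "cat"}

def run_command(cmd: str) -> str:
    """Run a shell command only if its executable is whitelisted."""
    parts = shlex.split(cmd)                 # tokenize without invoking a shell
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return f"command {cmd!r} not allowed for security reasons"
    # Passing a token list (no shell=True) avoids shell-injection tricks.
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout + result.stderr
```

Checking only `parts[0]` and skipping `shell=True` means chained commands like `ls; rm -rf /` are either rejected outright or passed as literal arguments rather than interpreted by a shell.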

This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
