A local-first multi-agent simulation and prediction engine powered by Ollama

hackernews | 📰 News
#llama #openai #opensource
Original source: hackernews · Summary and analysis by Genesis Park

Summary

This local-first multi-agent simulation and prediction engine runs on top of Ollama, and both the frontend and the backend can be started via npm. Docker Compose makes it easy to stand up an environment reachable on ports 3000 and 5001, and non-local LLM endpoints are also supported.

Full text

Local-first multi-agent simulation and prediction engine.

This project is a derivative work:
- Upstream: https://github.com/666ghj/MiroFish.git
- Target repository: https://github.com/oswarld/mirollama

This repository is tuned for the easiest onboarding path: run a local Ollama model, clone the repo, install dependencies, and start both services. No paid API key is required for the default local setup.

Features:
- Frontend UI (Vite + Vue) for graph building, simulation, reports, and interaction
- Flask backend APIs for simulation workflows
- Local-first LLM execution through Ollama's OpenAI-compatible endpoint
- Optional search providers (`none`, `searxng`, `zep`)

Project layout:
- Frontend: `frontend` (Vite dev server, default port 3000)
- Backend: `backend` (Flask API, default port 5001)
- Root scripts: install and run frontend/backend together
- Shared env file: root `.env` (loaded by the backend)

Prerequisites, installed first:
- Node.js >= 18
- Python >= 3.11
- uv (Python package/dependency runner)
- Ollama (running locally)

Quick checks:

```
node -v
python --version
uv --version
ollama --version
```

Clone the repository:

```
git clone https://github.com/oswarld/mirollama.git mirollama
cd mirollama
```

Pull one of the model tags listed in `.env.example` (`gpt-oss:120b`, `gpt-oss:20b`, `gemma4:31b`, `gemma4:26b`). For example:

```
ollama pull gpt-oss:20b
```

The Ollama endpoint expected by default is `http://localhost:11434/v1`.

Configuration. Copy the defaults:

```
cp .env.example .env
```

The default mode is fully local/offline-friendly:
- `LLM_BASE_URL=http://localhost:11434/v1`
- `SEARCH_PROVIDER=none`
- `LLM_API_KEY` can stay unset for local Ollama

Only change `LLM_MODEL_NAME` if you pulled a different model tag.

Install and run, in one command:

```
npm run setup:all
```

Or the equivalent step by step:

```
npm run setup
npm run setup:backend
npm run dev
```

Services:
- Frontend: http://localhost:3000
- Backend API: http://localhost:5001
- Health check: http://localhost:5001/health (curl smoke tests are sketched at the end of this section)

Optional SearXNG search, set in `.env`:

```
SEARCH_PROVIDER=searxng
SEARXNG_BASE_URL=http://localhost:8080
WEB_SEARCH_LANGUAGE=ko-KR
WEB_SEARCH_LIMIT=10
```

Optional Zep search, set in `.env`:

```
SEARCH_PROVIDER=zep
ZEP_API_KEY=your_zep_api_key_here
```

`ZEP_API_KEY` is required only when `SEARCH_PROVIDER=zep`.

To run the services individually:

```
npm run backend
npm run frontend
```

Docker: the repository includes `docker-compose.yml`:

```
cp .env.example .env
docker compose up -d
```

Published ports: 3000 (frontend) and 5001 (backend).

Troubleshooting:
- If `uv` is missing, install it and rerun `npm run setup:backend`.
- API-key errors usually mean you are using a non-local LLM endpoint. For Ollama, keep `LLM_BASE_URL` as `http://localhost:11434/v1`; for a cloud endpoint, set `LLM_API_KEY` in `.env`.
- "Model not found" is usually a model tag mismatch. Check your pulled models with `ollama list` and ensure `LLM_MODEL_NAME` in `.env` matches exactly (a tag-comparison check is sketched below).
- On port conflicts, free ports 3000/5001 or override them: backend via `FLASK_PORT=<port>`, frontend API target via `VITE_API_BASE_URL=http://localhost:<port>` (an example override is sketched below).

Tech stack:
- Frontend: Vue 3, Vite, Vue Router, Vue i18n, Axios, D3
- Backend: Flask, OpenAI SDK-compatible client, CAMEL/OASIS dependencies
- Runtime model provider: Ollama (default), or any OpenAI-compatible API

License and attribution:
- This project is licensed under the MIT License (see LICENSE).
- This repository is a derivative work based on 666ghj/MiroFish.
- Upstream repository: https://github.com/666ghj/MiroFish.git
- Current repository: https://github.com/oswarld/mirollama
- Derivative notices and attribution details: NOTICE

Maintenance notes:
- Keep the root `.env` as the single source for runtime config.
- Preserve local-first defaults unless explicitly changing product direction.
- If you change setup scripts or env keys, update this README in the same PR.
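To confirm the default local setup end-to-end before starting the app, Ollama's OpenAI-compatible endpoint can be exercised directly. A minimal smoke test, assuming the `gpt-oss:20b` tag from `.env.example` was pulled (substitute whatever you set as `LLM_MODEL_NAME`):

```bash
# Minimal smoke test against Ollama's OpenAI-compatible chat endpoint.
# Assumes gpt-oss:20b was pulled; substitute your LLM_MODEL_NAME tag.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": "Reply with one word: ready"}]
      }'
```

A JSON chat-completion response confirms that the endpoint the backend will use is live; a connection error here points at Ollama rather than at this repository's setup.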
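Once `npm run dev` (or `docker compose up -d`) is running, the published ports and the `/health` endpoint give a quick liveness check. The README does not specify the `/health` response body, so this sketch only checks for an HTTP 200:

```bash
# Check that both services answer on their default ports.
curl -s -o /dev/null -w "frontend: %{http_code}\n" http://localhost:3000
curl -s -o /dev/null -w "backend:  %{http_code}\n" http://localhost:5001/health

# Under Docker Compose, also confirm both containers report as running.
docker compose ps
```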
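For the "model not found" troubleshooting case above, comparing the locally pulled tags against the configured one usually settles it; this assumes `LLM_MODEL_NAME` is set in the root `.env` as described:

```bash
# Tags Ollama actually has locally...
ollama list
# ...versus the tag the backend will request (they must match exactly).
grep LLM_MODEL_NAME .env
```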
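And for the port-conflict case, a sketch of the documented overrides in the root `.env`; the value 5002 here is purely illustrative:

```
# Hypothetical override for when port 5001 is already taken:
FLASK_PORT=5002
# Point the frontend's API target at the new backend port:
VITE_API_BASE_URL=http://localhost:5002
```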

This analysis was produced by the Genesis Park editorial team with the help of AI. The original post is available via the source link.
