Show HN: Finsight – Privacy-first AI credit card and bank statement analyzer
hackernews
🔬 Research
#ai-analysis
#llama
#mistral
#review
#personal-finance
#privacy
#local-ai
#statement-analysis
#ai
#offline-ai
#financial-analysis
Original source: Hacker News · Summarized and analyzed by Genesis Park
Summary
Finsight, an AI financial analyzer built with privacy as the first priority, has been released. The tool runs 100% locally on the user's device, with no cloud and no login, which strengthens security. When a statement in PDF or Excel format is uploaded, the AI extracts and categorizes the transactions. It also provides an interactive dashboard, spending analysis, and recurring-payment detection, so users can easily understand and manage their own data.
Body
AI-powered personal finance analyzer: runs 100% locally, no cloud, no login. Upload a PDF / CSV / Excel bank or credit card statement → AI extracts and categorizes every transaction → interactive dashboard, spending insights, recurring payment detection, and chat with your data.

| Feature | Description |
|---|---|
| AI Parsing | Uses a local LLM (via Ollama or LM Studio) to extract transactions from PDFs with near-perfect accuracy |
| Any Model | Works with any model: Gemma, Llama, Mistral, Phi, Qwen, DeepSeek, etc. |
| Multiple Providers | Choose between Ollama or LM Studio, whichever fits your workflow |
| Auto Currency | AI detects the currency from your statement automatically |
| Smart Categorization | AI-powered transaction categorization with confidence scores and review flags |
| Credit Card Support | Parses credit card statements, detects international transactions, and tracks card-wise spending |
| Recurring Payments | Automatically detects subscriptions and recurring payments, so you can spot forgotten subscriptions |
| Financial Insights | Spending trends, category breakdowns, and financial health indicators |
| Dashboard | Pie charts, trend lines, income vs. expense breakdowns |
| Chat | Ask questions about your statement in natural language ("What was my highest expense?") |
| Budget | Plan next month's budget based on spending patterns |
| Privacy | Everything stays on your machine; no data leaves your browser + local LLM |
| Multi-format | PDF, CSV, XLS, XLSX supported |

**Prerequisites**

You need Node.js and npm installed on your system. Download a pre-built binary from [NodeJS Downloads](https://nodejs.org/en/download), or install using Docker:

```sh
# Docker has specific installation instructions for each operating system.
# Please refer to the official documentation at https://docker.com/get-started/

# Pull the Node.js Docker image:
docker pull node:24-alpine

# Create a Node.js container and start a shell session:
docker run -it --rm --entrypoint sh node:24-alpine

# Verify the Node.js version:
node -v # Should print "v24.14.0".

# Verify the npm version:
npm -v # Should print "11.9.0".
```

On macOS, you can install Node.js via Homebrew instead:

```sh
# Download and install Homebrew
curl -o- https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh | bash

# Download and install Node.js:
brew install node@24
```

**Choose your LLM provider**

Choose your LLM provider: Ollama or LM Studio. Both work equally well.

*Ollama.* Download from ollama.com or:

```sh
# macOS
brew install ollama

# Linux
curl -fsSL https://ollama.com/install.sh | sh
```

Pick any model you like. Smaller models are faster; larger models are more accurate.

```sh
# Small & fast (recommended for parsing)
ollama pull gemma3:1b
ollama pull llama3.2:1b
ollama pull phi4-mini

# Medium (good balance)
ollama pull gemma3:4b
ollama pull llama3.2:3b
ollama pull mistral

# Large (most accurate)
ollama pull llama3.1:8b
ollama pull gemma3:12b
ollama pull qwen2.5:7b

# Start the server
ollama serve
```

Ollama runs on http://localhost:11434 by default.

*LM Studio.* Download from lmstudio.ai:

- Open LM Studio
- Go to the Search tab
- Download any model you like (Gemma, Llama, Mistral, Phi, Qwen, etc.)
- Go to the Developer tab (or press Ctrl+D / Cmd+D)
- Click Start Server (default port: 1234)
- Important: go to Settings → Developer and enable CORS (required for browser requests)

LM Studio runs on http://localhost:1234 by default.

**Install and run**

```sh
# Clone the repository
git clone https://github.com/AJ/FinSight.git

# Install dependencies
npm install

# Build the code
npm run build

# Start the application server
npm start
```

Open http://localhost:3000.
**Configure Finsight**

- Go to Settings (in the sidebar)
- Select your LLM Provider (Ollama or LM Studio)
- Click Connect; the default URL should work automatically
- Select your preferred model from the dropdown
- Done: go back to the dashboard and upload a statement!

**Architecture**

```
Browser
 |
 +-- Upload PDF/CSV/XLS
 |        |
 |        v
 |   Text Extraction (pdfjs-dist)
 |        |
 |        v
 |   Local LLM (Ollama or LM Studio)
 |        |
 |        +-- Statement Type Detection
 |        +-- Transaction Parsing
 |        +-- Currency Detection
 |        |
 |        v
 |   Review & Confirm
 |
 +-- Features
      +-- Dashboard (charts, stats)
      +-- Transactions (filter, categorize)
      +-- Credit Cards (utilization, due dates)
      +-- Subscriptions (recurring detection)
      +-- Chat (ask questions)
      +-- Budget (planning)
```

**How it works**

- Text extraction: pdfjs-dist (for PDFs) or built-in parsers (for CSV/XLS) extract raw text from the document
- Statement type detection: AI analyzes the content to detect whether it is a bank statement or a credit card statement
- LLM parsing: the extracted text is sent to your local LLM, which returns structured JSON with dates, descriptions, amounts, types, and the auto-detected currency
- Chunking: long statements are automatically split into chunks to fit within the model's context window
- Validation & deduplication: every transaction is validated (date, amount, type) and duplicates are removed before being shown
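The "LLM parsing" step above can be sketched in TypeScript. This is a minimal illustration, not Finsight's actual code: the prompt wording, the `Txn` fields, and the function names are assumptions. The only concrete API used is Ollama's `/api/generate` endpoint (default port 11434), which returns the model's text in a `response` field.

```typescript
// Sketch of the LLM-parsing step: build a prompt that asks the model for
// structured JSON, then parse its reply defensively.

interface Txn {
  date: string;        // ISO date, e.g. "2024-03-01"
  description: string;
  amount: number;
  type: "debit" | "credit";
}

// Constrain the model to a JSON-only reply.
function buildPrompt(statementText: string): string {
  return [
    "Extract every transaction from the bank statement below.",
    'Reply with JSON only: {"currency": "...", "transactions":',
    '[{"date": "YYYY-MM-DD", "description": "...", "amount": 0.0, "type": "debit|credit"}]}',
    "--- STATEMENT ---",
    statementText,
  ].join("\n");
}

// Local models sometimes wrap JSON in prose, so pull out the outermost object.
function parseReply(reply: string): { currency: string; transactions: Txn[] } {
  const start = reply.indexOf("{");
  const end = reply.lastIndexOf("}");
  if (start < 0 || end <= start) throw new Error("no JSON object in reply");
  return JSON.parse(reply.slice(start, end + 1));
}

// Hypothetical call to a local Ollama server.
async function extractTransactions(statementText: string, model = "gemma3:4b") {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: buildPrompt(statementText),
      stream: false,
      format: "json", // ask Ollama to constrain output to valid JSON
    }),
  });
  const data = await res.json();
  return parseReply(data.response); // Ollama puts the model's text in `response`
}
```

Extracting the outermost `{...}` before `JSON.parse` is a common defensive trick, since even with `format: "json"` some models prepend or append prose.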
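The chunking step might look like the following sketch. The character budget is a stand-in for a real token count, and splitting on line boundaries keeps each transaction row intact across chunks; Finsight's actual splitter may work differently.

```typescript
// Sketch of chunking: split extracted statement text into pieces that fit a
// model's context window, never cutting a line (transaction row) in half.

function chunkText(text: string, maxChars: number): string[] {
  const chunks: string[] = [];
  let current: string[] = [];
  let size = 0;
  for (const line of text.split("\n")) {
    // Start a new chunk when adding this line would exceed the budget.
    if (size + line.length + 1 > maxChars && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
      size = 0;
    }
    current.push(line);
    size += line.length + 1; // +1 for the newline
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}
```

Each chunk is then sent to the LLM independently, which is also why a later deduplication pass is needed if chunks overlap.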
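The final validation-and-deduplication step can be sketched as below. The validity rules and the deduplication key (date + amount + type + normalized description) are plausible assumptions for illustration, not the project's actual logic.

```typescript
// Sketch of validation & deduplication: reject malformed rows and drop
// repeats that can appear when chunks overlap or a statement lists the same
// transaction twice.

interface Txn {
  date: string;
  description: string;
  amount: number;
  type: "debit" | "credit";
}

function isValid(t: Txn): boolean {
  return (
    !Number.isNaN(Date.parse(t.date)) &&
    typeof t.amount === "number" &&
    Number.isFinite(t.amount) &&
    (t.type === "debit" || t.type === "credit") &&
    t.description.trim().length > 0
  );
}

// Two transactions with the same date, amount, type, and normalized
// description are treated as one; the first occurrence wins.
function dedupe(txns: Txn[]): Txn[] {
  const seen = new Set<string>();
  const out: Txn[] = [];
  for (const t of txns) {
    const key = [t.date, t.amount, t.type, t.description.trim().toLowerCase()].join("|");
    if (!seen.has(key)) {
      seen.add(key);
      out.push(t);
    }
  }
  return out;
}

function clean(txns: Txn[]): Txn[] {
  return dedupe(txns.filter(isValid));
}
```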
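The recurring-payment detection mentioned in the feature table is typically a heuristic over grouped transactions. The sketch below flags merchants whose charges land roughly 30 days apart; the thresholds (at least 3 occurrences, ±7 days) are invented for illustration and are not Finsight's algorithm.

```typescript
// Sketch of recurring-payment detection: group charges by normalized
// merchant name and flag groups with roughly monthly spacing.

interface Charge {
  date: string; // ISO date
  description: string;
  amount: number;
}

const DAY_MS = 24 * 60 * 60 * 1000;

function findRecurring(charges: Charge[]): string[] {
  const groups = new Map<string, number[]>();
  for (const c of charges) {
    const key = c.description.trim().toLowerCase();
    const times = groups.get(key) ?? [];
    times.push(Date.parse(c.date));
    groups.set(key, times);
  }
  const recurring: string[] = [];
  for (const [name, times] of groups) {
    if (times.length < 3) continue; // need a few occurrences to call it a pattern
    times.sort((a, b) => a - b);
    const gaps: number[] = [];
    for (let i = 1; i < times.length; i++) gaps.push((times[i] - times[i - 1]) / DAY_MS);
    // "Monthly" if every gap is within a week of 30 days.
    if (gaps.every((g) => Math.abs(g - 30) <= 7)) recurring.push(name);
  }
  return recurring;
}
```

A real implementation would also handle weekly and yearly cadences and tolerate small merchant-name variations; the grouping-plus-interval idea stays the same.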
This analysis was produced by the Genesis Park editorial team with the help of AI. The original post can be found via the source link.