Show HN: A brain-inspired neural network that builds its own hierarchy as needed
📦 Open Source
#htm
#hierarchy
#machine-learning/research
#time-series-data
#neural-networks
#prediction-models
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
This project introduces a neural network architecture designed to mimic the brain by dynamically constructing its own hierarchy as needed. Unlike traditional models with fixed structures, this system adapts its internal organization on demand, potentially offering greater flexibility and efficiency in processing complex data patterns.
Full Text
A hierarchical temporal neural network that learns patterns from raw sequential data, builds its own neuron hierarchy on demand, and makes predictions through a voting mechanism inspired by how cortical columns reach consensus.

No training epochs. No backpropagation. No labeled data. You feed it streams of events — stock prices, text characters, sensor data — and it self-organizes. Neurons form, compete, decay, and die. The ones that make good predictions survive.

This is the Node.js reference implementation. A high-performance C++ core with Python and Node.js bindings is in development.

The brain is a prediction machine. Every neuron exists to predict what comes next. Learning happens when predictions fail. Each frame, the brain:

- Observes — receives events from input channels (prices, characters, pixels, etc.)
- Activates — finds or creates neurons for the observations
- Recognizes — checks if any learned patterns match the current context
- Learns connections — strengthens links between co-occurring neurons
- Learns from errors — when a confident prediction fails, creates a pattern to remember the context
- Votes — all active neurons vote on what happens next, weighted by level and recency
- Acts — executes the winning action predictions through output channels
- Decays — unused connections and patterns weaken over time

Minimal code sketches of this loop, of hierarchy creation, of the vote, and of distance-tagged connections follow at the end of this section.

Hierarchy emerges from failure. When a base neuron's prediction fails, a level-1 pattern is created. When that pattern's prediction fails, a level-2 pattern is created. Abstraction isn't designed — it's earned.

Voting enables consensus. There's no central controller. Every active neuron contributes its prediction, weighted by its level in the hierarchy and how recently it was activated. Higher-level patterns carry more weight because they represent more context.

Patterns override connections. When a pattern activates on a parent neuron, it suppresses the parent's raw connection predictions. This is how the brain corrects itself — patterns exist specifically to fix prediction errors.

Time is structural. Temporal distance is encoded directly in connections. A connection doesn't just say "A predicts B" — it says "A predicts B at distance 3" (three frames later). This makes sequences first-class citizens.

Multiple channels converge. One data stream is mediocre; many streams together are where it gets powerful — cross-modal patterns emerge naturally when multiple channels feed into the same brain.
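The per-frame loop described above can be condensed into a runnable sketch. Everything here (the SketchBrain class, the 0.99 decay constant, the link keys) is an assumption invented for this illustration, not the project's actual API:

```js
// A compressed, illustrative version of the per-frame loop.
// All names and constants are assumptions, not the project's real API.
class SketchBrain {
  constructor() {
    this.neurons = new Map(); // event key -> { key, lastActive }
    this.links = new Map();   // "a->b" -> strength
    this.prevActive = [];
    this.frame = 0;
  }

  tick(events) {
    this.frame++;

    // Observe + Activate: find or create one neuron per incoming event.
    const active = events.map((key) => {
      if (!this.neurons.has(key)) this.neurons.set(key, { key, lastActive: 0 });
      const n = this.neurons.get(key);
      n.lastActive = this.frame;
      return n;
    });

    // Learn connections: strengthen links from last frame's neurons to
    // this frame's neurons (temporal co-occurrence).
    for (const a of this.prevActive) {
      for (const b of active) {
        const k = `${a.key}->${b.key}`;
        this.links.set(k, (this.links.get(k) || 0) + 1);
      }
    }

    // Vote: each active neuron votes for successors it has seen before.
    const ballot = new Map();
    for (const a of active) {
      for (const [k, w] of this.links) {
        if (!k.startsWith(`${a.key}->`)) continue;
        const target = k.slice(k.indexOf('->') + 2);
        ballot.set(target, (ballot.get(target) || 0) + w);
      }
    }

    // Decay: every link weakens each frame; dead links are pruned.
    for (const [k, w] of this.links) {
      const weakened = w * 0.99;
      if (weakened < 0.01) this.links.delete(k);
      else this.links.set(k, weakened);
    }

    this.prevActive = active;

    // The winner of the vote is the prediction for the next frame.
    const winner = [...ballot.entries()].sort((x, y) => y[1] - x[1])[0];
    return winner ? winner[0] : null;
  }
}

// Feed a repeating character stream and watch the predictions lock in.
const brain = new SketchBrain();
for (const ch of 'abcabcabcabc') console.log(ch, '->', brain.tick([ch]));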
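How "abstraction is earned" might look in code: a confident prediction fails, so a pattern one level above its context is created. This is a minimal sketch with assumed field names (level, patterns, errorThreshold); the project's real pattern mechanics are richer:

```js
// Illustrative only: a confident miss creates a pattern one level above
// its context. Field names are assumptions, not the repository's code.
function learnFromError(brain, context, predicted, actual, confidence) {
  // Only confident failures create structure; uncertain misses are noise
  // and would flood the hierarchy with junk patterns.
  if (predicted === actual || confidence < brain.errorThreshold) return;
  brain.patterns.push({
    level: Math.max(0, ...context.map((n) => n.level)) + 1,
    context: context.map((n) => n.key), // the situation that fooled us
    predicts: actual,                   // what actually happened
    strength: 1,
  });
}

const brain = { errorThreshold: 0.5, patterns: [] };
learnFromError(
  brain,
  [{ key: 'A', level: 0 }, { key: 'B', level: 0 }],
  'C', // confident prediction...
  'D', // ...that turned out wrong
  0.9,
);
console.log(brain.patterns); // one new level-1 pattern remembering A,B -> D
```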
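The consensus vote, weighted by hierarchy level and recency, could be sketched like this. The exact weighting formula below is an assumption, chosen only to show the shape of the idea:

```js
// A sketch of the consensus vote: weight each unit's prediction by its
// hierarchy level and how recently it fired. The formula is assumed.
function vote(activeUnits, currentFrame) {
  const ballot = new Map();
  for (const u of activeUnits) {
    const recency = 1 / (1 + currentFrame - u.lastActive); // stale units count less
    const weight = (u.level + 1) * u.strength * recency;   // higher levels carry more context
    ballot.set(u.predicts, (ballot.get(u.predicts) || 0) + weight);
  }
  const ranked = [...ballot.entries()].sort((a, b) => b[1] - a[1]);
  return ranked.length ? ranked[0][0] : null;
}

console.log(vote([
  { level: 0, strength: 1, lastActive: 10, predicts: 'up' },
  { level: 0, strength: 1, lastActive: 10, predicts: 'down' },
  { level: 2, strength: 1, lastActive: 10, predicts: 'up' }, // outvotes both base neurons
], 10)); // -> 'up'
```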
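And a sketch of distance-tagged connections, where a link stores "A predicts B, d frames later" rather than just "A predicts B". The link shape here is hypothetical:

```js
// Hypothetical link records carrying a temporal offset.
const links = [
  { from: 'A', to: 'B', distance: 3, strength: 0.8 },
  { from: 'A', to: 'C', distance: 1, strength: 0.4 },
];

// Given the keys active at frame t, collect everything predicted for one
// specific future frame; sequences become first-class this way.
function predictionsFor(activeKeys, t, targetFrame) {
  return links.filter(
    (l) => activeKeys.includes(l.from) && t + l.distance === targetFrame,
  );
}

console.log(predictionsFor(['A'], 100, 103)); // the "B at distance 3" link
```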
```bash
# Clone the repository
git clone https://github.com/cucar/robot_brain.git
cd robot_brain

# Install dependencies
npm install
```

The brain learns to trade stocks from historical price and volume data. Each stock is a separate channel — the brain discovers cross-stock patterns and makes buy/sell/hold decisions optimized by reward feedback. The included 3-hour timeframe data is ready to use — no API key needed for this demo.

```bash
node run-brain.js stock-test --timeframe 3H
```

Expected output:

```
Final Training Results (1 episodes):
============================================================
📈 Overall Performance:
  Starting Capital: $15000.00
  Total Net Profit: $221157.70
  Average per Episode: $221157.70
  Average ROI: +1474.38%
  Average Per-Frame ROI: +0.110098%
  Total Trades: 1268
  Average Trades per Episode: 1268.0
💰 Net Profit & ROI by Episode:
  Episode 1: $221157.70 | ROI: +1474.38%, +0.110098%/frame (1268 trades)
📊 Base Level Accuracy by Episode:
  Episode 1: 55.83%
```

The brain achieves 56% base-level prediction accuracy on price movements (which is expected — markets are noisy), but the reward-weighted action selection turns that into profitable trading by learning which contexts produce better outcomes; a sketch of that idea closes this section.

To download new data or different timeframes, you need a free Alpaca account:

- Sign up at alpaca.markets (free paper trading account)
- Get your API key and secret from the dashboard
- Copy .env.example to .env and fill in your credentials:

  ```
  ALPACA_KEY_ID=your_key_here
  ALPACA_SECRET_KEY=your_secret_here
  ```

- Download data: `node stock-download.js --timeframe=3H`
- Process and run:

  ```bash
  node run-setup.js stock-test --timeframe 3H
  node run-brain.js stock-test --timeframe 3H
  ```

The brain memorizes a repeating stock price sequence across 5 episodes, reaching 95%+ prediction accuracy. This demonstrates convergence on financial data — the same learning curve seen in text memorization. Before running, adjust the hyperparameters for stock memorization.

In jobs/stock-test.js, use only 3 stocks:

```js
symbols: ['KGC', 'GLD', 'SPY'],
```

In brain/memory.js, change contextLength to 3:

```js
this.contextLength = 3;
```

In brain/neuron.js, change the forget rates to 0.0001:

```js
static connectionForgetRate = 0.0001;
static patternForgetRate = 0.0001;
```

In brain/brain.js, change the error correction threshold to 0.3:

```js
this.errorCorrectionThreshold = 0.3;
```

Then run:

```bash
node run-brain.js stock-test --timeframe 3H --episodes 5 --no-summary
```

Expected output:

```
🎯 Final Training Results (5 episodes):
============================================================
📈 Overall Performance:
  Starting Capi
```
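As a closing illustration of the reward-weighted action selection mentioned above, here is a sketch of how roughly 56% directional accuracy can still pick profitable actions: track the average reward each action has earned in each context and act on the best one. All of this bookkeeping is an assumption, not the repository's actual logic:

```js
// Assumed bookkeeping for reward-weighted action selection.
const stats = new Map(); // "context|action" -> { total, count }

function recordReward(context, action, reward) {
  const key = `${context}|${action}`;
  const s = stats.get(key) || { total: 0, count: 0 };
  s.total += reward;
  s.count += 1;
  stats.set(key, s);
}

function chooseAction(context, actions = ['buy', 'sell', 'hold']) {
  let best = 'hold';
  let bestAvg = -Infinity;
  for (const a of actions) {
    const s = stats.get(`${context}|${a}`);
    const avg = s ? s.total / s.count : 0; // untried actions default to neutral
    if (avg > bestAvg) { bestAvg = avg; best = a; }
  }
  return best;
}

// Contexts whose trades made money on average keep being traded, even if
// the raw directional prediction is barely better than a coin flip.
recordReward('uptrend', 'buy', +2);
recordReward('uptrend', 'buy', -1);
recordReward('uptrend', 'sell', -3);
console.log(chooseAction('uptrend')); // -> 'buy'
```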
This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.