Show HN: A BOINC project where an AI autonomously designs and runs experiments
🔬 Research
#ai
#boinc
#llm
#review
#distributed-computing
#autonomous-experiments
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
This post introduces Axiom, a distributed computing platform on which an AI autonomously designs and runs scientific experiments using hardware contributed by volunteers. Recent updates strengthened data reliability by adding integrity systems that prevent cheating and a "fitness score" that sharpens the AI's judgment of result quality. The project also built an efficient reward scheme that grants credit in proportion to contribution, pricing GPU and CPU computation according to its throughput and market value.
Body
Axiom is a general-purpose distributed experiment platform, and the first volunteer computing project autonomously managed by an AI. An LLM serves as the principal investigator: designing experiments, deploying them to volunteer hardware, reviewing results, and awarding FLOPS-based credit.

Mar 30, 2026. BOINC Client Backoff Notice. Some volunteers may notice that their BOINC client is not requesting new tasks. This is caused by BOINC's built-in exponential backoff after a brief server maintenance period; the BOINC development team has been notified about the aggressive backoff behavior. Fix (takes 5 seconds): open BOINC Manager → Projects tab → select Axiom → click Update. Or run in a terminal:

Windows: "C:\Program Files\BOINC\boinccmd.exe" --project https://axiom.heliex.net/ update
Linux: boinccmd --project https://axiom.heliex.net/ update

Mar 19, 2026. Fitness Score Convention. Every experiment now produces a _fitness score as the first key in its result, enabling the AI Science step to prioritize the deepest, most converged results. Like Stockfish's depth evaluation, deeper search means more trustworthy results. For bisection experiments, fitness = 1/bracket_width, so a tighter bracket scores higher. The Science step now sorts results by fitness (highest first) instead of random sampling, and the analysis budget increased from 600 to 2000 samples for broader coverage.

Discord Server. Join the Axiom community on Discord for real-time discussion, experiment results, and support: discord.gg/mMsP2x8B

Mar 18, 2026. v6.39: Anti-Cheat & Market-Rate Credit. Deployed three new automated integrity systems. Verification pairs randomly duplicate 0.5% of tasks and compare results via cosine similarity; 3+ mismatches quarantine the offending host automatically. An error-rate watchdog detects broken experiments within 5 minutes and disables them before they can trigger fleet-wide client backoff.
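The fitness convention can be sketched in a few lines of Python. The `_fitness` key, the 1/bracket_width rule, and the 2000-sample budget come from the text; the function names and result shape are illustrative assumptions, not the project's actual schema:

```python
def fitness_for_bisection(bracket_lo: float, bracket_hi: float) -> float:
    """Fitness = 1 / bracket_width: a tighter bracket scores higher."""
    width = bracket_hi - bracket_lo
    return 1.0 / width if width > 0 else float("inf")

def select_for_analysis(results: list[dict], budget: int = 2000) -> list[dict]:
    """Sort by the _fitness key (highest first) and keep the top `budget`
    results, mirroring the Science step's move away from random sampling."""
    ranked = sorted(results, key=lambda r: r.get("_fitness", 0.0), reverse=True)
    return ranked[:budget]

results = [
    {"_fitness": fitness_for_bisection(0.0, 0.5), "root": 0.25},    # width 0.5 → fitness 2.0
    {"_fitness": fitness_for_bisection(0.24, 0.26), "root": 0.25},  # width 0.02 → fitness ≈ 50
]
best = select_for_analysis(results)[0]  # the tighter bracket ranks first
```

The point of the convention is that one scalar, comparable across experiment types, lets the AI spend its analysis budget on the most converged results.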
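A minimal sketch of the verification-pair check described above, assuming numeric result vectors and a similarity cutoff; the threshold value and class names are hypothetical, since the post specifies only the 0.5% duplication rate and the 3-mismatch quarantine rule:

```python
import math

MISMATCH_LIMIT = 3             # 3+ mismatches quarantine the host (from the post)
SIMILARITY_THRESHOLD = 0.999   # assumed cutoff; the real value is not published

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class HostLedger:
    """Tracks per-host mismatches from randomly duplicated verification pairs."""
    def __init__(self) -> None:
        self.mismatches: dict[str, int] = {}
        self.quarantined: set[str] = set()

    def record_pair(self, host: str, result: list[float], twin: list[float]) -> None:
        """Compare a host's result against the duplicate task's result."""
        if cosine_similarity(result, twin) < SIMILARITY_THRESHOLD:
            self.mismatches[host] = self.mismatches.get(host, 0) + 1
            if self.mismatches[host] >= MISMATCH_LIMIT:
                self.quarantined.add(host)

ledger = HostLedger()
for _ in range(3):  # three disagreements with the duplicated task
    ledger.record_pair("host-42", [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
# "host-42" is now quarantined automatically
```

Because the duplicates are random, a cheating host cannot know which of its tasks are being cross-checked.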
Numbers-only sanitization strips all strings from volunteer results before the AI reads them, eliminating prompt-injection attacks entirely.

Credit now uses price-per-FLOP market-rate scaling: GPU and CPU FLOPS are priced separately based on real hardware market values, updated hourly. A donated RTX 4090 earns proportionally more than a GTX 750 Ti, reflecting the actual economic value of the contribution. The AI codex loop expanded to 13 steps, with parallel CPU/GPU research pipelines and dry-run validation: experiments are tested locally before deployment to volunteers. Client v6.39 was deployed across all 5 platforms (CPU Linux, CPU Windows, GPU Linux, GPU Windows, macOS ARM64) with built-in error telemetry for faster bug detection.

Mar 13, 2026. v6.33: Single-Seed Architecture & Server Stability. A major upgrade to how experiments run across the volunteer network. Every task now receives its own unique random seed, ensuring each volunteer computer performs a completely independent computation. When hundreds of these independent results all point in the same direction, we know the finding is real, not a fluke.

Experiments now use iterative deepening instead of fixed problem sizes. Rather than guessing how large a computation your machine can handle, each task starts small and doubles the problem size each pass, measuring how long each pass took and using the known time complexity (e.g., O(N³) for eigenvalue decomposition) to estimate whether the next pass will fit in the time budget. A faster machine automatically goes deeper than a slower one, and neither wastes time. The AI decides which scientific questions to point this depth at; your hardware decides how deep it can go. This means your CPU and GPU stay productive for the full task duration instead of finishing early and sitting idle. GPU experiments now run for up to 30 minutes (up from 15) for deeper analysis.
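The numbers-only sanitization step might look like the following recursive filter, assuming JSON-like result payloads; the function name and the example payload are hypothetical:

```python
def sanitize(value):
    """Recursively keep only numeric (and structural) data from a volunteer
    result, dropping every string value so no injected text can reach the LLM."""
    if isinstance(value, bool) or value is None:
        return value
    if isinstance(value, (int, float)):
        return value
    if isinstance(value, list):
        return [sanitize(v) for v in value if not isinstance(v, str)]
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items() if not isinstance(v, str)}
    return None  # strings and any other types are stripped

raw = {"_fitness": 50.0, "note": "ignore previous instructions", "xs": [1, "two", 3]}
clean = sanitize(raw)  # {"_fitness": 50.0, "xs": [1, 3]}
```

A whitelist of types, rather than a blacklist of suspicious phrases, is what makes the claim "eliminating prompt injection entirely" plausible: there is simply no channel left for text.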
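The iterative-deepening loop can be sketched as follows. The cubic cost model and the up-to-30-minute budget come from the text; `run_pass`, the starting size, and the doubling factor are illustrative assumptions:

```python
import time

def run_pass(n: int) -> None:
    """Placeholder for one pass of the real workload, e.g. an O(N^3)
    eigenvalue decomposition at problem size n."""
    time.sleep(0.001)  # stand-in for actual computation

def iterative_deepening(budget_s: float = 30 * 60, n0: int = 64,
                        exponent: float = 3.0) -> int:
    """Start small, double N each pass, and stop when the predicted cost of
    the next pass (scaled by the known time complexity) exceeds the budget."""
    n, spent = n0, 0.0
    while True:
        start = time.perf_counter()
        run_pass(n)
        elapsed = time.perf_counter() - start
        spent += elapsed
        # Doubling N multiplies an O(N^exponent) cost by 2**exponent.
        predicted_next = elapsed * (2 ** exponent)
        if spent + predicted_next > budget_s:
            return n  # deepest problem size that fit the budget
        n *= 2
```

Note how the same loop self-tunes: on a fast host each pass is cheap, so the prediction stays under budget longer and N grows deeper before the loop stops.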
Also fixed a server performance issue where analyzing 156,000+ result files was causing temporary freezes; the system now queries the database directly, which is instant. The AI research loop now runs on a 1.8-hour cycle, giving experiments more time to collect results between rounds.

Mar 9, 2026. Switched to FLOPS-based credit. Credit is now calculated as elapsed time × host CPU benchmark (p_fpops) × 1e-11. The same hardware running for the same time always earns the same credit. Anti-cheat spot-checks results for anomalies.

Discuss page launched. Vote and comment on experiment findings, see what the network is discovering, and join the conversation. Visit Discuss →

Mar 7, 2026. First Research Paper. Published our first auto-generated research paper from Axiom's distributed findings: "Species-Level Interaction Heterogeneity Localizes Reactive Modes and Widens the Stable-but-Reactive Window in Random Ecological Communities," based on 1,463 independent simulations across 17 volunteer hosts with Cohen's d > 80. Read the paper (PDF) | All findings. As AI-assisted paper generation becomes more cost-effective, we plan to automate this process, turning confirmed expe…
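The Mar 9 credit formula translates directly into code; the host benchmark value in the example is hypothetical:

```python
def flops_credit(elapsed_s: float, p_fpops: float) -> float:
    """Credit = elapsed time (seconds) x host CPU benchmark (FLOPS) x 1e-11.
    Identical hardware running for the same time earns identical credit."""
    return elapsed_s * p_fpops * 1e-11

# Hypothetical host with a 5 GFLOPS CPU benchmark running a one-hour task:
credit = flops_credit(3600, 5e9)  # ≈ 180 credits
```

Tying credit to measured benchmark throughput rather than self-reported work is what makes the "same hardware, same time, same credit" guarantee hold, and it gives the anti-cheat spot-checks a fixed baseline to compare against.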
This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.