Show HN: An open-source Claude Code setup for publishing research papers 10x faster
#claude
#claude code
#paper-writing
#machine-learning
#machine-learning/research
#research-automation
#open-source
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
RStack is an open-source tool that minimizes the time researchers spend on rote work and automates the whole journey from idea generation to a submission-ready paper. Each capability works standalone yet chains into a pipeline: literature review (/lit-review), experiment design and model training on Modal cloud GPUs (/experiment), and LaTeX paper writing formatted for venues such as arXiv (/write-paper). Notably, it uses Claude Code as its runtime, with no separate database or backend, and it inserts reviewable checkpoints at every phase so the researcher stays in control.
Full text
Research automation skills for Claude Code. Type /research and go from idea to submittable paper. Each skill works standalone but chains together into a full pipeline.

A PhD student with a deadline spends 80% of their time on grunt work: finding papers, provisioning GPUs, formatting LaTeX. RStack compresses that to near zero. The thinking stays with the researcher.

| Skill | What it does | When to use |
|---|---|---|
| /research | Full pipeline: idea to paper in one session | "Write a paper about...", "research this" |
| /lit-review | Find papers, structured summary, gap analysis | "Find papers about...", "literature review" |
| /novelty-check | Assess novelty, refine hypothesis | "Is this novel?", "check existing work" |
| /experiment | Generate code, run on cloud GPU (Modal), iterate | "Run experiments", "train a model" |
| /analyze-results | Publication-ready figures, tables, statistics | "Make figures", "analyze results" |
| /write-paper | Venue-formatted LaTeX with real results and citations | "Write the paper", "format for arXiv" |
| /setup | Configure Modal, tectonic, venue preferences | First-time setup |
| /rstack-upgrade | Upgrade to latest version | "Upgrade rstack" |

Install:

```
git clone --single-branch --depth 1 https://github.com/sunnnybala/Rstack.git ~/.claude/skills/rstack
cd ~/.claude/skills/rstack && ./setup
```

Then in Claude Code, run /setup to configure Modal and install tectonic.
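Each skill above ships as a plain SKILL.md file that Claude Code reads and follows. As a rough, hypothetical sketch of the shape such a file might take (the frontmatter fields and steps below are invented for illustration, not copied from the RStack repo):

```markdown
---
name: lit-review
description: Find papers, produce a structured summary and a gap analysis.
---

# Literature review

1. Search for papers matching the user's topic.
2. Append one structured record per paper to .rstack/lit-review.jsonl.
3. Write a human-readable summary to lit-review.md.
4. Stop and wait for the researcher's approval before handing off.
```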
For teams (vendored into project):

```
cp -Rf ~/.claude/skills/rstack .claude/skills/rstack
rm -rf .claude/skills/rstack/.git
cd .claude/skills/rstack && ./setup --local
```

The full pipeline:

```
/research "Investigate whether mixture-of-experts improves efficiency of small language models on code generation tasks"
```

Individual skills:

```
/lit-review "transformer efficiency for code generation"
/novelty-check     # compare idea against found papers
/experiment        # generate and run experiments on Modal
/analyze-results   # create figures and tables
/write-paper       # write arXiv-formatted paper
```

```
IDEA → /lit-review → /novelty-check → /experiment → /analyze → /write-paper → PAPER
  ↑        ↑              ↑               ↑            ↑            ↑
  └────────┴──────────────┴───────────────┴────────────┴────────────┘
                    revision loops at every checkpoint
```

Every phase transition is a human checkpoint. You approve the literature review before novelty assessment, approve the experiment plan before cloud submission, and review each paper section before the next. The pipeline is iterative, not linear.

Each skill is a SKILL.md file that Claude Code reads and follows. No backend, no database, no custom agents. Work products live at your project root as normal files. Structured logs persist in .rstack/. Cloud compute happens through Modal CLI commands that Claude runs directly, the same pattern as GStack running git push or gh pr create.

- Pure SKILL.md files: no Express, no React, no Postgres. Claude Code IS the runtime.
- Work products at project root: visible files (paper.tex, figures, idea.md). JSONL plumbing in .rstack/.
- Modal for cloud compute: Claude runs `modal run train.py` directly. No wrappers.
- Two-phase install: offline bootstrap (./setup) + interactive auth (/setup skill).
- Credentials in native stores: Modal auth stays in ~/.modal.toml. Never in RStack config.

See ARCHITECTURE.md for the full design rationale.
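Because the plumbing is plain append-only JSONL, the experiment log can be read or extended with nothing but the standard library. A minimal sketch, assuming a record schema invented here for illustration (the actual fields RStack writes may differ):

```python
import json
from pathlib import Path

LOG = Path(".rstack/experiments.jsonl")


def append_run(record):
    """Append one experiment record as a single JSON line (append-only)."""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def load_runs():
    """Read every record back: one JSON object per non-empty line."""
    if not LOG.exists():
        return []
    lines = LOG.read_text(encoding="utf-8").splitlines()
    return [json.loads(line) for line in lines if line.strip()]


# Hypothetical record; field names are not RStack's actual schema.
append_run({"run": "run-001", "gpu": "T4", "status": "complete"})
```

Append-only JSONL keeps the log crash-safe and diff-friendly: each run is one line, and nothing already written is ever rewritten.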
Requirements:

- Claude Code (or any Claude Code-compatible agent)
- Python 3.8+
- Modal (for cloud GPU experiments): `pip install modal && modal token new`
- tectonic (for LaTeX compilation): installed via /setup

Work products live at the project root as normal, visible files. Internal plumbing (structured JSONL logs) lives in .rstack/.

```
my-project/                  # Git root
├── idea.md                  # Your research idea
├── lit-review.md            # Human-readable literature review
├── refined-idea.md          # Sharpened hypothesis (from /novelty-check)
├── novelty-assessment.md    # Novelty analysis with score
├── experiment-plan.md       # Experiment design document
├── train.py                 # Generated experiment code
├── requirements.txt         # Experiment dependencies
├── results/                 # Raw outputs from cloud
│   └── run-001/
│       ├── metrics.json
│       ├── stdout.log
│       └── figures/
├── analysis/                # Publication-ready figures + tables
│   ├── figures/             # PNG + PDF
│   ├── tables/              # LaTeX source
│   └── stats.json           # Statistical summary
├── paper.tex                # The paper
├── paper.bib                # BibTeX citations
├── paper.pdf                # Compiled paper
└── .rstack/                 # Internal plumbing (hidden)
    ├── lit-review.jsonl     # Structured paper records
    └── experiments.jsonl    # Append-only experiment log
```

Global config at ~/.rstack/config.yaml:

```
bin/rstack-config get venue          # read: arxiv
bin/rstack-config set venue icml     # write
bin/rstack-config list               # show all
```

RStack respects your privacy. Telemetry is off by default.

| Tier | What's collected | Shared remotely |
|---|---|---|
| off (default) | Nothing | No |
| anonymous | Skill name, duration, outcome, OS | Yes (no device ID) |
| community | Same + stable device ID | Yes |

No code, file paths, repo names, or research content is ever collected or sent.

```
rstack-config set telemetry community   # opt in
rstack-config set telemetry off         # opt out
```
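Since the global config is a small YAML file, a `get`-style lookup like `rstack-config get venue` needs no dependencies for flat `key: value` entries. A minimal sketch under that flat-file assumption (a real YAML parser such as PyYAML would be needed for nested config; this naive version also keeps values like `off` as strings rather than YAML booleans):

```python
from pathlib import Path


def read_config(path):
    """Naively parse flat 'key: value' lines from a YAML-style config file."""
    config = {}
    if not path.exists():
        return config
    for line in path.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        # Skip blanks, comments, and lines without a key-value separator.
        if not line or line.startswith("#") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config


# Demo against a local file standing in for ~/.rstack/config.yaml.
cfg_path = Path("config.yaml")
cfg_path.write_text("venue: arxiv\ntelemetry: off\n", encoding="utf-8")
cfg = read_config(cfg_path)
print(cfg["venue"])  # arxiv
```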
This analysis was produced by the Genesis Park editorial team with the help of AI. The original post is available via the source link.