Show HN: Run local LLM prompts in remote SSH shell sessions

hackernews | 📦 Open source
#ai #anthropic #claude #cli #llama #llm #openai #ssh #terminal #prompt
Original source: hackernews · Summarized and analyzed by Genesis Park

Summary

This tool, posted on Hacker News (Show HN), runs prompts against a locally installed large language model (LLM) from inside a remote SSH shell session, giving you AI assistance right at the command line. To help prevent data leakage, it can rely on the local machine's GPU and model resources instead of an external API, and it is useful while working on a remote server for tasks such as understanding context or generating scripts. This lets developers keep a high-quality AI assistant in the terminal without giving up security.

Full text

Quick Start | Documentation | Examples

promptcmd is a manager and executor for programmable prompts. Define a prompt template once, then execute it like any other terminal command, complete with argument parsing, --help text, and stdin/stdout integration. Unlike tools that require you to manually manage prompt files or rely on implicit tool calling, promptcmd gives you explicit control over what data your models can access. Build compositional workflows by nesting prompts, executing shell commands within templates, and piping data through multi-step AI pipelines.

Create a .prompt file, enable it with promptctl, and execute it like any native tool:

$ promptctl create bashme_a_script_that
$ bashme_a_script_that renames all files in current directory to ".backup"

Prepend SSH commands with promptctl and your prompts magically appear in your remote shell sessions:

$ promptctl ssh user@server
server $ bashme_a_script_that renames all files in current directory to ".backup"

Use your Ollama endpoint or configure an API key for OpenAI, OpenRouter, Anthropic, or Google, and swap between them with ease:

$ promptctl create render-md
$ cat README.md | render-md -m openai
$ cat README.md | render-md -m ollama/gpt-oss:20b

Distribute requests across several providers with equal or weighted distribution for cost optimization:

# config.toml
[groups.balanced]
providers = ["openai", "google"]

$ cat README.md | render-md -m balanced

Cache responses for a configured amount of time to add determinism to pipelines and reduce token consumption:

# config.toml
[providers.openai]
cache_ttl = 60  # number of seconds

Set or override the TTL during execution:

$ cat README.md | render-md --config-cache-ttl 120

Use variants to define custom models with their own personality or task specialization:

[providers.anthropic]
api_key = "sk-xxxxx"
model = "claude-sonnet-4-5"

[providers.anthropic.glados]
system = "Use sarcasm and offensive jokes like the GLaDOS character from Portal."

[providers.anthropic.wheatley]
system = "Reply as if you are Wheatley from Portal."

$ tipoftheday -m glados
$ tipoftheday -m wheatley

Installation

Linux/macOS:
curl -LsSf https://installer.promptcmd.sh | sh

macOS (Homebrew):
brew install tgalal/tap/promptcmd

Windows (PowerShell):
powershell -ExecutionPolicy Bypass -c "irm https://installer-ps.promptcmd.sh | iex"

Configuration

Configure your API keys by editing config.toml:

promptctl config edit

Find your provider's name, e.g., for anthropic:

[providers.anthropic]
api_key = "sk-ant-api03-..."

Alternatively, you can set the keys via environment variables:

PROMPTCMD_ANTHROPIC_API_KEY="your_api_key"
PROMPTCMD_OPENAI_API_KEY="your_api_key"
PROMPTCMD_OPENROUTER_API_KEY="your_api_key"

Create a summarize.prompt file:

promptctl create summarize

Insert the following:

---
model: anthropic/claude-sonnet-4-5
input:
  schema:
    words?: integer, Summary length in words
---
Summarize the following text{{#if words}} in {{words}} words{{/if}}:

{{STDIN}}

Enable and use it:

# Enable as a command
promptctl enable summarize

# Use it
cat article.txt | summarize
echo "Long text here..." | summarize --words 10

# Auto-generated help
summarize --help

That's it. Your prompt is now a native command.

Full documentation is available at docs.promptcmd.sh. Browse the Examples directory or visit https://promptcmd.sh/lib for interactive viewing.

GPLv3 License - see the LICENSE file for details.
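The compositional workflows mentioned at the top (nesting prompts and piping data through multi-step AI pipelines) are not demonstrated in the excerpt above. Below is a minimal sketch, assuming the summarize and render-md prompts from the examples are already enabled and that prompt commands compose through ordinary shell pipes; the file name is a placeholder.

# Summarize a long document with the model configured in summarize.prompt,
# then render the result with a local Ollama model.
$ cat RELEASE_NOTES.md | summarize --words 100 | render-md -m ollama/gpt-oss:20b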
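Pulling the scattered config.toml fragments together, a complete file might look roughly like the sketch below. It only uses keys that appear in the excerpt (api_key, model, system, providers, cache_ttl); the combined layout and the placeholder values are assumptions, not the project's documented defaults.

# config.toml (sketch assembled from the fragments above; layout assumed)
[providers.openai]
api_key = "sk-..."              # or set PROMPTCMD_OPENAI_API_KEY instead
cache_ttl = 60                  # cache responses for 60 seconds

[providers.anthropic]
api_key = "sk-ant-api03-..."
model = "claude-sonnet-4-5"

[providers.anthropic.glados]    # a variant with its own system prompt
system = "Use sarcasm like the GLaDOS character from Portal."

[groups.balanced]               # spread requests across providers
providers = ["openai", "anthropic"]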
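Finally, combining the SSH integration with the summarize example gives an end-to-end picture of the headline feature. The host name and log command are placeholders, and it is assumed here that a prompt enabled locally behaves the same inside a promptctl ssh session, with inference still going through your locally configured provider.

$ promptctl enable summarize
$ promptctl ssh admin@web-01
web-01 $ journalctl -u nginx --since today | summarize --words 40
web-01 $ exit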

This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.
