# wtf-cli

An AI-powered terminal error solver written in Rust.
wtf-cli is a command-line wrapper that runs your terminal commands as usual and, if they fail, intercepts the error output to provide an AI-generated solution on the spot. It supports local models via Ollama, as well as cloud-based ones via OpenAI, Gemini, or OpenRouter.

## Features

- **Seamless wrapping:** Just prepend `wtf` to your command. If it works, you get your normal output. If it fails, the AI jumps in.
- **Privacy first:** The primary focus is running local AI models using Ollama, meaning no API costs and total privacy.
- **Cloud fallbacks:** Supports OpenAI (`OPENAI_API_KEY`), Gemini (`GEMINI_API_KEY`), and OpenRouter (`OPENROUTER_API_KEY`) as fallbacks.
- **Clear structure:** Provides actionable, structured output so you know exactly what failed and the command to fix it.

## Prerequisites

- Rust & Cargo (latest stable version recommended)
- Optional (but recommended): Ollama running locally for free, private AI analysis

## Installation

### Via Cargo

```sh
cargo install wtf-cli
```

### From source

1. Clone the repository:

   ```sh
   git clone https://github.com/yourusername/wtf-cli.git
   cd wtf-cli
   ```

2. Install the binary using Cargo:

   ```sh
   cargo install --path .
   ```

3. Ensure the Cargo bin directory is on your system's `PATH`. You can copy these commands exactly; your shell will automatically expand variables like `$HOME` or `$env:USERPROFILE`.

   Linux / macOS (Bash/Zsh):

   ```sh
   export PATH="$HOME/.cargo/bin:$PATH"
   ```

   Add this to your `~/.bashrc` or `~/.zshrc` to make it permanent.

   Windows (PowerShell):

   ```powershell
   $env:Path += ";$env:USERPROFILE\.cargo\bin"
   ```

   To make this permanent, add it to your PowerShell profile or use the "Environment Variables" GUI.

4. Configure your preferred AI provider:

   ```sh
   wtf --setup
   ```

## Updating

To update to the latest version, simply run:

```sh
cargo install wtf-cli
```

If you've installed wtf-cli from source and want to pull the latest changes:

1. Navigate to your local wtf-cli repository:

   ```sh
   cd path/to/wtf-cli
   ```

2. Pull the latest code:

   ```sh
   git pull origin main
   ```

3. Re-install the project (the `--force` flag ensures the old binary gets overwritten):

   ```sh
   cargo install --force --path .
   ```

## Usage

Simply prepend `wtf` to any command you usually run:

```sh
# Example 1: A failing npm script
wtf npm run build

# Example 2: Exploring a non-existent directory
wtf ls /fake/directory
```

If the command succeeds, wtf exits gracefully, just as the command normally would. If it fails, wtf captures the error output, sends it to the configured AI, and prints the diagnosis and suggested fix back to you. A minimal sketch of this flow appears at the end of this README.

## Configuration

You can configure your preferred AI provider by running:

```sh
wtf --setup
```

This command presents an interactive menu that lets you choose between Ollama, OpenAI, Gemini, and OpenRouter using your arrow keys. It automatically creates or updates a `.env` file in the current directory with your selection.

Alternatively, you can manually create a `.env` file in the directory where you run the tool. A template is provided in `.env.example`:

```sh
cp .env.example .env
```

Or set these environment variables globally:

```sh
# AI Provider (auto-detected if not set)
# Options: ollama, openai, gemini, openrouter
WTF_PROVIDER=ollama

# Ollama (Default provider)
OLLAMA_MODEL=qwen3.5:9b
OLLAMA_HOST=http://localhost:11434

# OpenAI Fallback
OPENAI_API_KEY=your_openai_key_here
OPENAI_MODEL=gpt-4o-mini
# OPENAI_API_BASE=https://api.openai.com/v1

# Gemini Fallback
GEMINI_API_KEY=your_gemini_key_here
GEMINI_MODEL=gemini-2.0-flash

# OpenRouter Fallback
OPENROUTER_API_KEY=your_openrouter_key_here
OPENROUTER_MODEL=arcee-ai/trinity-mini:free
```

This project was vibe-coded.

## License

MIT
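
## How it works (sketch)

The Usage section above describes wtf as a transparent wrapper: run the command, pass output through on success, and attach an AI diagnosis on failure. Below is a minimal, standard-library-only Rust sketch of that general technique. It is not the project's actual implementation, and `ask_ai` is a hypothetical stand-in for the call to Ollama or a cloud provider.

```rust
use std::env;
use std::process::{exit, Command};

/// Hypothetical stand-in for the real AI call (Ollama, OpenAI, etc.).
fn ask_ai(cmd: &str, stderr: &str) -> String {
    format!("(imagine an AI diagnosis of `{cmd}` here, based on: {stderr})")
}

fn main() {
    // Everything after the program name is the wrapped command,
    // e.g. `wtf npm run build` -> ["npm", "run", "build"].
    let args: Vec<String> = env::args().skip(1).collect();
    let (prog, rest) = match args.split_first() {
        Some(split) => split,
        None => {
            eprintln!("usage: wtf <command> [args...]");
            exit(2);
        }
    };

    // Run the command and capture its output instead of streaming it.
    let output = Command::new(prog)
        .args(rest)
        .output()
        .expect("failed to spawn the wrapped command");

    // Forward stdout either way, so wrapping stays transparent.
    print!("{}", String::from_utf8_lossy(&output.stdout));

    // Success: exit gracefully, just like the unwrapped command.
    if output.status.success() {
        exit(0);
    }

    // Failure: forward stderr, then attach the AI diagnosis.
    let stderr = String::from_utf8_lossy(&output.stderr);
    eprint!("{stderr}");
    eprintln!("--- wtf: AI diagnosis ---");
    eprintln!("{}", ask_ai(&args.join(" "), &stderr));
    exit(output.status.code().unwrap_or(1));
}
```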
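
Similarly, the Configuration section says the provider is auto-detected when `WTF_PROVIDER` is unset. One plausible reading of that precedence, assumed here rather than taken from the crate, is: explicit setting first, then whichever API key is present, then the Ollama default. The real logic may differ.

```rust
use std::env;

/// Pick an AI provider from the environment (assumed precedence).
fn detect_provider() -> String {
    // An explicit WTF_PROVIDER setting always wins.
    if let Ok(provider) = env::var("WTF_PROVIDER") {
        return provider;
    }
    // Otherwise, fall back to whichever cloud API key is present.
    for (key, name) in [
        ("OPENAI_API_KEY", "openai"),
        ("GEMINI_API_KEY", "gemini"),
        ("OPENROUTER_API_KEY", "openrouter"),
    ] {
        if env::var(key).is_ok() {
            return name.to_string();
        }
    }
    // Default: local Ollama, no key required.
    "ollama".to_string()
}

fn main() {
    println!("provider: {}", detect_provider());
}
```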