# Claude Code told me what tools it needs to work faster

Oh boy, I was missing so many things. My Claude was craving better tooling.

Your AI coding assistant has an environment: the binaries in your PATH, the MCP servers in your config, the aliases in your shell. That environment directly affects the quality and efficiency of what it produces. Do you know if it's optimized for your agent? How would you know?

I decided to ask:

> "What tools are you missing to be effective on my machine? Analyze what's installed, what's missing, what's broken, what's redundant. Prioritize by impact on your ability to help me."

It launched six parallel subagents, looped through every binary in my PATH, parsed my Homebrew packages, checked my MCP server configuration, inspected my shell aliases, and cataloged my global npm installs. It came back with a prioritized report: critical gaps, high-value upgrades, broken configs, and things I should uninstall. (!!!)

Claude Code (probably) doesn't actually have preferences. It generates recommendations based on patterns from its training data and its knowledge of what tools its own codebase-analysis features depend on. But that's precisely what makes the exercise useful: it knows what tools it can invoke and what happens when they're missing.

## The tools it said it needs

Beyond the CLI, it also mentioned some MCP servers, but I won't focus on them: `@anthropic/mcp-server-fetch|memory|filesystem`.

- **ripgrep**: a better grep. It's fast and respects `.gitignore` in git repositories.
- **fd**: the modern find. Claude always needs to look into files, and when it writes shell commands dozens of times per session, shorter commands mean fewer syntax errors and less wasted context.
- **fzf**: interactive filtering. Claude builds piped command chains like `fd -e ts | fzf` to let you select a file interactively.
- **DuckDB** was the one I didn't expect. Claude wanted it for ad-hoc data analysis: running SQL directly on CSV, Parquet, or JSON files without import steps or server setup.
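To make the DuckDB point concrete, here is a minimal sketch; `sales.csv` and its columns are hypothetical, not from the article:

```shell
# Query a CSV in place with DuckDB's CLI: no import step, no server.
# -c runs the given SQL and exits.
duckdb -c "
  SELECT region, SUM(amount) AS total
  FROM 'sales.csv'
  GROUP BY region
  ORDER BY total DESC;
"
```

The same one-liner works against Parquet and JSON files by changing the path, which is exactly the "one SQL query instead of a Python script" workflow described below.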
It's a ~30MB binary with zero external dependencies. Claude's argument: "When you ask me to analyze data, I currently have to write Python scripts or parse things with jq. With DuckDB, I write one SQL query."

```shell
$ brew install ripgrep fd fzf duckdb
```

## Better output for the AI to parse

Claude identified tools that improve the structure of the output it reads.

- **git-delta** makes git diffs more parseable by adding line numbers and cleaner context boundaries. Raw `git diff` output is a wall of text with minimal structure; delta breaks it into sections the AI can navigate more accurately. Ask Claude to set up its config properly for LLM consumption; the default is not good.
- **xh** is curl with structured output. When Claude tests API endpoints, xh separates headers, status codes, and body cleanly. I don't see a massive difference compared to `curl -v`, but if Claude says it's better (…).

```shell
$ brew install git-delta xh
```

## Automation that saves context tokens

Two tools that reduce back-and-forth in sessions:

- **watchexec** watches for file changes and reruns commands automatically. `watchexec -e rs -- cargo test` replaces Claude writing polling loops or asking you to re-run things manually.
- **just** as a task runner. When Claude bootstraps projects, it often creates Makefiles. A Justfile is just simpler.

```shell
$ brew install watchexec just
```

## Static analysis with real tools

This one has a commercial option, but basically it just means: add any static code analysis to your pipelines! **semgrep** lets Claude run static analysis rules that are deterministic. When you ask for a security review, there's a difference between "the AI thinks this looks like SQL injection" and "semgrep rule `python.django.security.injection.sql` flagged this line." This is ABSOLUTELY the right kind of feedback loop to have in any LLM loop.

```shell
$ brew install semgrep
```

## The pattern

The specific tools matter less than what this exercise revealed.
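As an aside on the semgrep feedback loop: a custom rule is just a small YAML file the agent can run deterministically. A minimal sketch (the rule id, pattern, and message are illustrative, not from the article):

```yaml
# rules.yml — flag SQL built with string formatting in Python
rules:
  - id: sql-string-format
    pattern: cursor.execute("..." % ...)
    message: SQL query built with string formatting; use parameterized queries
    languages: [python]
    severity: ERROR
```

Run it with `semgrep --config rules.yml src/`; the output names the rule and line, which is the kind of deterministic signal an LLM loop can act on.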
Addy Osmani argues for treating the LLM as a pair programmer that needs clear direction, context, and the right tools. We set up laptops for new engineers: we give them `.env` files, an IDE, extensions, various CLIs, credentials. We must do the same for the AI writing code with us. Its tooling is different from ours.

If you use Nix flakes or dev containers, you can version-control this setup and make it reproducible, including the AI's preferred tools alongside your own.

For fellow macOS users, the one-liner:

```shell
brew install ripgrep fd fzf duckdb git-delta xh watchexec just semgrep
```

The best way to get more from your AI coding assistant isn't just a better prompt, it's a better PATH.

---

I use a tools.md file for ease of maintenance and tell my agent/claude.md to read it.

Since these tools do a lot of searching, using Rust-based alternatives can really speed things up.

For projects with a lot of files, building file manifests broken out by file type or concern can be useful. I am working on a large project with about 2M image files and another 400-500 code-related files. Rather than search them
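A tools.md of the kind mentioned above can be a plain checklist the agent reads at session start; the entries below are illustrative, not from the article:

```markdown
# tools.md — environment notes for the coding agent

## Search / files
- rg (ripgrep): prefer over grep; respects .gitignore
- fd: prefer over find for locating files

## Data
- duckdb: run SQL directly on CSV/Parquet/JSON instead of ad-hoc scripts

## Tasks
- just: use the Justfile for build/test commands
- watchexec: rerun tests on file change instead of polling
```

Pointing claude.md (or your agent's equivalent config) at this file keeps tool preferences out of the main instructions and easy to maintain.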