Claude Code with Google Lighthouse Tests
📦 Open Source
#ai #claude #claude code #lighthouse #tip #performance-optimization #code-verification
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
To stop AI coding assistants (Claude Code, Cursor, and the like) from shipping bloated, slow code, this post introduces an automated system that validates code performance against Lighthouse tests plus Google Analytics and Search Console data. The system runs a self-healing loop in which the AI refactors its own code until it reaches a 100% performance score, and it blocks code commits when real users' Core Web Vitals degrade or search rankings drop. Initial setup takes a single command, minimizing manual developer work while enforcing a fast, stable, optimized web experience.
Article
Stop your AI from pushing bloated, slow code to production. This is a complete performance verification system that forces AI coding assistants (Claude Code, Cursor, GitHub Copilot, etc.) to validate code against three critical measures:

- Synthetic Tests — Lighthouse (100% performance required)
- Real User Data — Google Analytics Core Web Vitals (must not regress)
- Search Health — Google Search Console (rankings, crawl errors, indexing)

If any check fails, the AI automatically refactors its own code until it passes all three.

- ✅ Bulletproof Code — AI validates against both test scores AND real user data
- ✅ Zero Manual Setup — One command installs everything (Lighthouse + GA + GSC)
- ✅ Self-Healing — AI reads performance errors and fixes them automatically
- ✅ Graceful Degradation — Works with just Lighthouse on day one, add GA/GSC later
- ✅ Distribution Ready — Curl these files into any project

Drop these files into your project:

```bash
# 1. Core files
curl -O https://raw.githubusercontent.com/YOUR_USERNAME/ai-performance-guardrails/main/CLAUDE.md
curl -O https://raw.githubusercontent.com/YOUR_USERNAME/ai-performance-guardrails/main/lighthouserc.js
mkdir -p scripts
curl -o scripts/check-speed.js https://raw.githubusercontent.com/YOUR_USERNAME/ai-performance-guardrails/main/scripts/check-speed.js

# 2. Install Lighthouse
npm install -D @lhci/cli
```

Then add to `package.json`:

```json
{
  "scripts": {
    "check-speed": "node scripts/check-speed.js"
  }
}
```

Done. Claude Code will load `CLAUDE.md` and enforce Lighthouse checks.

First time setting up Google Cloud credentials? → Follow the credential setup guide (walks through everything step-by-step).

Already have credentials?
Grab the setup script:

```bash
curl -o setup.sh https://raw.githubusercontent.com/YOUR_USERNAME/ai-performance-guardrails/main/setup.sh
bash setup.sh
```

The script will:

- Ask for your GA Property ID and GSC Site URL
- Prompt for your Google Cloud service account JSON file path
- Install MCP servers (analytics-mcp, mcp-server-gsc)
- Wire everything into Claude Code settings
- Run a smoke test
- Add these files to your project (same as Option A):

```json
{
  "scripts": {
    "check-speed": "node scripts/check-speed.js"
  }
}
```

- Restart Claude Code so it loads the new MCP servers

Done. The full 3-phase loop is now active. Every code change is validated to achieve this:

```
┌─────────────┬───────────────┬───────────┬───────┐
│ Performance │ Accessibility │ Best      │ SEO   │
│ 100         │ 100           │ Practices │ 100   │
│             │               │ 100       │       │
└─────────────┴───────────────┴───────────┴───────┘
```

Plus: real user metrics don't regress (GA Core Web Vitals, GSC rankings).

**Phase 1: Capture the baseline**

```bash
node scripts/check-analytics.js baseline
```

The AI captures:

- 📊 Core Web Vitals (LCP, CLS, INP) from Google Analytics
- 📈 Top slowest pages by URL
- 🔍 Top search queries and CTR from Google Search Console
- ⚠️ Any crawl errors or indexing failures

These metrics are saved to `.perf-baseline.json` — the bar the AI cannot regress below. If GA/GSC are not configured, Phase 1 is skipped silently; Lighthouse still runs.

**Phase 2: Lighthouse loop** (repeats until 100%)

```bash
npm run check-speed
```

- You ask Claude Code to add/update code
- Claude builds and runs Lighthouse
- Results:
  - ✅ 100%? Proceed to Phase 3
  - ❌ Below 100%? Claude reads the error, identifies the bottleneck (large bundles, unoptimized images, main-thread blocking), refactors, and retries

This loop repeats until Lighthouse reports 100% performance.

**Phase 3: Validate against the baseline**

```bash
node scripts/check-analytics.js validate
```

The AI compares current metrics against the baseline from Phase 1:

- 📊 Did Core Web Vitals improve or hold steady?
- 🔍 Did search rankings or CTR drop?
- ⚠️ Are there new crawl errors?

Results:

- ✅ Improved or stable? Code is committed. Done.
- ❌ Regression detected? Claude must refactor before committing.

If GA/GSC are not configured, Phase 3 is skipped silently; code commits after Phase 2.

| Service | Metrics | What It Catches |
|---|---|---|
| Lighthouse | Performance, Accessibility, Best Practices, SEO | Bloated bundles, unoptimized images, render-blocking resources, CLS issues |
| Google Analytics | LCP, CLS, INP (Core Web Vitals) | Real user slowdowns that don't show in synthetic tests |
| Search Console | Rankings, CTR, impressions, crawl errors | SEO regressions, indexing problems, search visibility drops |

To relax the thresholds, edit `lighthouserc.js`:

```js
'categories:performance': ['error', { minScore: 0.95 }], // 95% instead of 100%
'categories:accessibility': ['warn', { minScore: 0.8 }],
```

If you're not using Next.js, edit `scripts/check-speed.js`:

```js
execSync('yarn build', { stdio: 'inherit' }); // Your build command
execSync('yarn dev', { stdio: 'inherit' });   // Your dev server command
```

To disable individual audits, in `lighthouserc.js`:

```js
'cumulative-layout-shift': ['off'], // Don't enforce CLS
'first-contentful-paint': ['off'],  // Don't enforce FCP
```

Troubleshooting:

- "Cannot find module @lhci/cli": run `npm install -D @lhci/cli`
- "Cannot connect to localhost:3000"
  - Ensure `npm run start` or `npm run dev` works for your project
  - Update the start command in `scripts/check-speed.js`
- "Service account file not found"
  - Download your service account JSON from Google Cloud Console
  - Ensure the path in `.env.analytics` is absolute
- "GA authentication failed (401)"
  - Verify the service account email has the "Analytics Viewer" role in GA
  - Check your property ID matches GA4 (not Universal Analytics)
- "GSC authentication failed (403)"
  - Ensure the service account email is added as a "Property Administrator" in GSC
  - Use the exact site URL (with https:// and a trailing slash, or the sc-domain: prefix)
- "Data is older than 28 days"
  - The GA API requires at least 24 hours of data; if your site is new, Phase 1 may show minimal data initially
  - The tool still works — Phase 2 (Lighthouse) runs regardless
Why would Lighthouse pass while real users regress? This is rare but can happen if:

- Real user experience differs from the synthetic test environment
- Your hosting/CDN has regional performance issues
- Third-party scripts (analytics, ads) are slower in production

Check `.perf-baseline.json` and the validate report to see which metric regressed, then focus your optimization there.

Example session:

You: "Add a carousel with 50 product images"

```
Claude Code:
├─ Phase 1: node scripts/check-analytics.js baseline
│   └─ Saves baseline: LCP=2.1s, CLS=0.08, 15 top queries recorded
│
├─ Phase 2: npm run check-speed (loop)
│   ├─ Writes carousel component with img tags
│   ├─ Lighthouse: 34% (images unoptimized, 12MB bundle)
│   ├─ Refactors: next/image, WebP, lazy loading
│   ├─ Lighthouse: 87% (still has unused JS)
│   ├─ Removes dead code, code-splits routes
│   └─ Lighthouse: 100% ✓
│
└─ Phase 3: node scripts/check-analytics.js validate
    ├─ Current: LCP=1.9s, CLS=0.05 (IMPROVED ✓)
    ├─ Queries: same 15, CTR stable (GOOD ✓)
    └─ Code committed
```

Found a bug? Have a feature idea? Submit a PR! We welcome contributions for:

- Framework-specific optimizations (Vue, Svelte, etc.)
- CI/CD examples (GitHub Actions, GitLab, Vercel, etc.)
- Extended analytics support (Plausible, Fathom, Mixpanel)
- Additional validators (WebPageTest, SpeedCurve)

MIT — Use freely in your projects. Built to keep the web fast. ⚡
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.