Show HN: Drift – Catch architecture erosion from AI-assisted coding
hackernews
📦 Open source
#ai coding
#review
#dev tools
#architecture
#open source
#static analysis
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
Drift, a static analysis tool for Python, was built to detect the architecture erosion and technical debt introduced by AI coding tools. Unlike Ruff or Semgrep, it computes a composite drift score from deterministic signals that catch cross-file dependency violations, pattern fragmentation, near-duplicate code, and more (the project lists 15 scoring signals in total). It currently supports Python 3.11 and above, and its effectiveness is backed by a study of 15 real-world codebases.
Full text
Deterministic architecture erosion detection for AI-accelerated codebases.

97.3% precision (single-rater) · 15 signals · deterministic · no LLM in pipeline · full study · docs

```
pip install drift-analyzer
drift analyze --repo .
```

```
╭─ drift analyze myproject/ ──────────────────────────────────────────────╮
│ DRIFT SCORE 0.52   Δ -0.031 ↓ improving │ 87 files │ AI: 34% │ 2.1s     │
╰──────────────────────────────────────────────────────────────────────────╯

Module              Score  Bar                   Findings  Top Signal
src/api/routes/     0.71   ██████████████░░░░░░  12        PFS 0.85
src/services/auth/  0.58   ███████████░░░░░░░░░  7         AVS 0.72
src/db/models/      0.41   ████████░░░░░░░░░░░░  4         MDS 0.61

◉ PFS 0.85  Error handling split 4 ways → src/api/routes.py:42
            → Next: consolidate into shared error handler
◉ AVS 0.72  DB import in API layer → src/api/auth.py:18
            → Next: move DB access behind service interface
```

Drift finds the architecture erosion AI-generated code silently introduces: pattern fragmentation, boundary violations, near-duplicate utilities, and structural hotspots that pass tests but weaken the codebase. It is designed for Python teams that want fast structural feedback without adding an LLM to the analysis path.
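To make the boundary-violation (AVS) finding above concrete, here is a minimal, self-contained sketch of what "move DB access behind a service interface" could look like. All names and the Protocol-based layering are hypothetical illustrations, not code from the project; only the direction of the fix comes from the finding's suggested next step.

```python
"""Hypothetical sketch: API layer depends on a service boundary instead of
importing the DB layer directly (the pattern an AVS finding points at).
In a real repo the three sections would live in separate modules
(src/services/, src/db/, src/api/)."""
from dataclasses import dataclass
from typing import Protocol


# --- service boundary (e.g. src/services/users.py, hypothetical) ---
@dataclass
class UserDTO:
    id: int
    name: str


class UserRepository(Protocol):
    """What the service layer needs; the API layer never sees the DB."""
    def get(self, user_id: int) -> UserDTO: ...


def fetch_user(repo: UserRepository, user_id: int) -> UserDTO:
    return repo.get(user_id)


# --- DB layer implementation (e.g. src/db/models.py, hypothetical) ---
class InMemoryUserRepository:
    def __init__(self) -> None:
        self._rows = {1: UserDTO(id=1, name="ada")}

    def get(self, user_id: int) -> UserDTO:
        return self._rows[user_id]


# --- API layer (e.g. src/api/auth.py): depends on the service, not the DB ---
def get_user(repo: UserRepository, user_id: int) -> dict:
    dto = fetch_user(repo, user_id)
    return {"id": dto.id, "name": dto.name}


if __name__ == "__main__":
    print(get_user(InMemoryUserRepository(), 1))
```

With this layering, the import that triggered the AVS finding (a DB import inside src/api/) disappears, because the API module only references the service boundary.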
- Run it: Quick Start and Configuration
- Evaluate: Example Findings, Trust and Evidence, Stability
- Contribute: CONTRIBUTING.md, DEVELOPER.md, POLICY.md

```yaml
- uses: sauremilk/drift@v1
  with:
    fail-on: none
    upload-sarif: "true"
```

Built for AI-assisted sessions where an LLM writes most code and you steer.

```
drift scan --repo . --max-findings 5   # session start: agent learns baseline
drift diff --uncommitted               # before commit
drift diff --staged-only               # index only
drift diff --diff-ref main             # compare against main
drift check --repo . --fail-on high    # CI gate
```

Each call returns accept_change: true | false with blocking reasons the agent can act on directly.

Drift can run as an MCP (Model Context Protocol) server so AI agents can call analysis tools directly over stdio.

Install MCP support: pip install drift-analyzer[mcp]
Start the server: drift mcp --serve

Minimal VS Code setup in .vscode/mcp.json:

```json
{
  "servers": {
    "drift": {
      "type": "stdio",
      "command": "drift",
      "args": ["mcp", "--serve"]
    }
  }
}
```

Common agent-native calls:

```
drift scan --repo .
drift diff --staged-only
drift validate --repo .
drift fix-plan --repo .
```

See Integrations and API and Outputs for details.

```yaml
- run: drift check --repo . --fail-on high
```

Same signals, same deterministic engine — no LLM involved at analysis time.

Your linter, type checker, and test suite can tell you whether code is valid. They do not tell you whether the repository is quietly splitting into incompatible patterns across modules. Drift focuses on that gap:

- Ruff / formatters / type checkers: local correctness and style, not cross-module coherence.
- Semgrep / CodeQL / security scanners: risky flows and policy violations, not architectural consistency.
- Maintainability dashboards: broad quality heuristics, not a drift-specific score with reproducible signal families.

Current public evidence: 15 real-world repositories in the study corpus, 15 scoring signals, and auto-calibration that rebalances weights at runtime. Full study → · Trust & limitations

Problem: A FastAPI service has 4 connectors, each implementing error handling differently — bare except, custom exceptions, retry decorators, and silent fallbacks.
Solution: drift analyze --repo . --sort-by impact --max-findings 5
Output: PFS finding with score 0.96 — "26 error_handling variants in connectors/" — shows exactly which files diverge and suggests consolidation.

Problem: A database model file imports directly from the API layer, creating a circular dependency that breaks test isolation.
Solution: drift check --fail-on high
Output: AVS finding — "DB import in API layer at src/api/auth.py:18" — blocks the CI pipeline until the import direction is fixed.

Problem: AI code generation created 6 identical _run_async() helper functions across separate task files instead of finding the existing shared utility.
Solution: drift analyze --repo . --format json | jq '.findings[] | select(.signal=="MDS")'
Output: MDS findings listing all 6 locations with similarity scores ≥ 0.95, enabling a single extract-to-shared-module refactoring.

```yaml
name: Drift
on: [push, pull_request]
jobs:
  drift:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: sauremilk/drift@v1
        with:
          fail-on: none          # report findings without blocking CI
          upload-sarif: "true"   # findings appear as PR annotations
```

Once the team has reviewed findings for a few sprints, tighten the gate:

```yaml
- uses: sauremilk/drift@v1
  with:
    fail-on: high          # block only high-severity findings
    upload-sarif: "true"
```

```
drift check --fail-on none   # report-only
drift check --fail-on high   # block on high-severity findings
```

The fastest way to add drift to your workflow:

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/sauremilk/drift
    rev: v0.10.2
    hooks:
      - id: drift-check   # blocks o
```
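The third example above filters findings with jq; for teams scripting in Python, a minimal equivalent sketch is shown below. It assumes only what the example demonstrates: that `drift analyze --repo . --format json` emits a findings list whose entries carry a `signal` field. The `score`, `message`, and `locations` fields used for printing are hypothetical and may not match the real JSON schema.

```python
"""Minimal sketch: build a refactoring worklist of MDS (near-duplicate)
findings from Drift's JSON output. Mirrors the documented jq pipeline:
    drift analyze --repo . --format json | jq '.findings[] | select(.signal=="MDS")'
Field names other than `findings` and `signal` are assumptions."""
import json
import subprocess

# Run the analysis and capture machine-readable output.
# check=True raises if the drift CLI exits with a non-zero status.
result = subprocess.run(
    ["drift", "analyze", "--repo", ".", "--format", "json"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)

# Keep only near-duplicate (MDS) findings, highest assumed score first.
mds = [f for f in report.get("findings", []) if f.get("signal") == "MDS"]
mds.sort(key=lambda f: f.get("score", 0), reverse=True)

for finding in mds:
    # `score`, `message`, and `locations` are hypothetical field names.
    print(f"MDS {finding.get('score', '?')}: {finding.get('message', '')}")
    for loc in finding.get("locations", []):
        print(f"  - {loc}")
```

The same pattern works for any of the other signals, for example swapping "MDS" for "PFS" to collect pattern-fragmentation findings into a consolidation backlog.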
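As a complement, the extract-to-shared-module refactor suggested by that MDS example might look like the sketch below. The module path tasks/_shared.py and the helper body are hypothetical; only the duplicated helper's name, _run_async(), appears in the original example.

```python
"""Hypothetical shared utility (e.g. tasks/_shared.py) that replaces six
per-file copies of _run_async() flagged by the MDS findings above."""
import asyncio
from collections.abc import Coroutine
from typing import Any, TypeVar

T = TypeVar("T")


def run_async(coro: Coroutine[Any, Any, T]) -> T:
    """Run a coroutine to completion from synchronous task code.

    The body is illustrative; the point is that every task module now
    imports this single helper instead of re-declaring its own copy.
    """
    return asyncio.run(coro)


# In each task file (hypothetical), the local duplicate
#     def _run_async(coro): ...
# is deleted and replaced with:
#     from tasks._shared import run_async
```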
This analysis was produced by the Genesis Park editorial team with the help of AI. The original post is available via the source link.