Show HN: Crabby, open-source web + API synthetic monitoring
hackernews
💼 Business
#api
#tip
#monitoring
#opensource
#web-performance
#crabby
Summary
Crabby is an open-source synthetic monitoring tool that measures the availability and performance of websites and APIs, collecting a range of metrics from DNS resolution time to DOM rendering time. It offers three types of probes: a simple probe that uses Go's built-in HTTP client, a browser probe that uses headless Chrome, and an API probe that can test multi-step API workflows. Collected performance data can be sent to multiple backends simultaneously, including Prometheus, DogStatsD, InfluxDB, Splunk, and PagerDuty, and the recommended way to deploy it to Kubernetes is via the included Helm chart.
Why it matters
Developer perspective
Under review
Researcher perspective
Under review
Business perspective
Under review
Full text
crabby is a website performance tester that measures page load times and reports the measurements to a collection endpoint for processing, monitoring, and viewing.

Crabby can collect and report these metrics:

- DNS resolution time
- TCP connection time
- TLS negotiation time
- HTTP response code
- Remote server processing time
- Time to first byte (TTFB)
- Server response time
- DOM rendering time

Crabby currently supports these metrics delivery backends. You can enable any combination of them simultaneously and Crabby will send metrics to all of them:

- Prometheus - time measurements as metrics, exposed via a Prometheus endpoint
- DogStatsD - time measurements as metrics via Datadog's DogStatsD protocol
- InfluxDB - time measurements as metrics using the InfluxDB v2 wire protocol over HTTP
- Splunk - metrics and events via the Splunk HTTP Event Collector
- PagerDuty - incident generation based on failed jobs via the PagerDuty V2 Events API
- Log - configurable flat-file or stdout logging of metrics and events

Crabby has three types of probes for measuring website performance:

- `simple` uses Go's built-in `net/http` client to conduct HTTP requests. These requests measure server performance metrics, including TLS negotiation time for HTTPS. This probe fetches the full response for a URL but does not fetch objects referenced by that page. The `simple` probe is appropriate for measuring app/API availability, time to first byte (TTFB), DNS lookup time, and TCP connection time. Being headless, it cannot measure DOM rendering time.
- `browser` uses chromedp (Chrome DevTools Protocol) to conduct browser-based performance tests via a headless Chrome instance. The `browser` probe is appropriate when page render time is the primary concern. These tests pull down the page along with all objects included in the page.
- `api` allows you to define multi-step API workflows with template-based response chaining. Each step can reference values from previous steps' responses using `{{ step_name.key.nested_key }}` syntax. This is useful for testing authenticated API flows where a token from one request must be passed to subsequent requests.

Crabby has first-class support for Prometheus. It exposes a Prometheus endpoint that provides all gathered metrics and applies global and per-job tags (if configured) as labels. The config.yaml in the examples directory will get you started.

Crabby can send metrics via the DogStatsD protocol, compatible with the Datadog Agent and other DogStatsD-compatible collectors.

Crabby can send metrics to InfluxDB over HTTP/HTTPS using the InfluxDB v2 client API. Crabby can talk to InfluxDB directly, or you can stand up a Telegraf instance to relay the metrics to InfluxDB (or some other datastore).

Crabby supports sending metrics and events to Splunk via the HTTP Event Collector. The Splunk storage backend supports specifying the host, source, and sourceType values for events, as well as the index to which they are appended. The config.yaml in the examples directory will get you started.

Crabby can generate incidents based on failed jobs. Using PagerDuty's V2 Events API, Crabby will generate an incident for jobs that result in a 4xx or 5xx response code. See CONFIGURATION.md for details on how to configure this storage backend.

Crabby includes a configurable logging backend that can write metrics and events to stdout, stderr, or a file, with customizable format strings and timestamps.

Crabby is configured by a YAML file that you pass via the -config flag; if you don't pass the flag, Crabby looks for a config.yaml by default. This config file defines the sites to be tested (called "jobs") as well as the metric storage destination(s) for the metrics that are generated. Crabby supports multiple metric storage backends simultaneously, so you could, for example, send metrics to InfluxDB while also making them available via the Prometheus endpoint.
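As a rough sketch of how jobs, the `api` probe's response chaining, and multiple simultaneous storage backends could fit together in one config, something like the following is plausible. Note this is an illustrative guess at the shape of the file: apart from the pieces named above (jobs, the `simple` and `api` probe types, the `{{ step.key }}` chaining syntax, and the Prometheus/InfluxDB backends), the field names here are assumptions, not Crabby's confirmed schema; consult the config.yaml in the examples directory and CONFIGURATION.md for the real layout.

```yaml
# Hypothetical config.yaml sketch -- field names are illustrative,
# not Crabby's documented schema.
jobs:
  - name: homepage
    type: simple              # Go net/http probe: TTFB, DNS, TCP, TLS
    url: https://example.com/
    interval: 30              # seconds between probe runs (assumed field)

  - name: login-flow
    type: api                 # multi-step workflow with response chaining
    steps:
      - name: auth
        url: https://example.com/api/token
        method: POST
      - name: profile
        url: https://example.com/api/me
        headers:
          # value chained from the "auth" step's JSON response
          Authorization: "Bearer {{ auth.access_token }}"

storage:                      # multiple backends active at once
  prometheus:
    listen-addr: ":8080"      # metrics exposed on a Prometheus endpoint
  influxdb:
    url: https://influx.example.com:8086
```

The point of the sketch is the shape: jobs and storage are independent, so adding a second backend does not change any job definition.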
For metrics storage backends that support them, Crabby supports tags: global tags applied to all jobs and their metrics, and per-job tags applied to a single job's metrics. In the event of tag name conflicts, per-job tags override global tags.

Crabby can load secrets (API tokens, routing keys) from files instead of inlining them in the config, which is useful for Kubernetes Secrets or Docker Secrets. Use the token-file or routing-key-file config fields to point to a mounted secret file. Crabby also sets a configurable User-Agent header on all HTTP requests.

The recommended way to deploy Crabby is with the included Helm chart for Kubernetes clusters:

```
helm install crabby ./helm/crabby -f my-values.yaml
```

| Key | Type | Default | Description |
|---|---|---|---|
| replicaCount | int | 1 | Number of replicas |
| image.repository | string | ghcr.io/chrissnell/crabby | Container image repository |
| image.pullPolicy | string | IfNotPresent | Image pull policy |
| image.tag | string | "" | Image tag (defaults to chart appVersion) |
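The keys in the table above are overridden through the values file passed with `-f`. A minimal `my-values.yaml` using only those documented keys might look like this (the filename itself is just the placeholder from the install command):

```yaml
# my-values.yaml -- overrides for the chart keys listed in the table above
replicaCount: 2
image:
  repository: ghcr.io/chrissnell/crabby
  pullPolicy: IfNotPresent
  tag: ""        # empty string falls back to the chart's appVersion
```

Any key left out keeps the chart default, so a values file only needs the settings you actually change.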