Show HN: Keeper – An embedded secret store for Go (help me break it)
📦 Open Source
#argon2id
#go
#security
#encryption
#vulnerability
Source: Hacker News · Summarized and analyzed by Genesis Park
Summary
Keeper, an embedded secret-management store for Go, is built on a bbolt database and protects data with Argon2id key derivation and XChaCha20-Poly1305 encryption. The library provides four security policy levels, HSM and remote KMS integration, and a tamper-evident audit chain, and can be used independently in three forms: a Go library, an HTTP handler, and a CLI. With master key rotation, encrypted metadata, and integrity verification, developers can build a robust security layer directly into their own process.
Article
Keeper is a cryptographic secret store for Go. It encrypts arbitrary byte payloads at rest using Argon2id key derivation and XChaCha20-Poly1305 (default) authenticated encryption, and stores them in an embedded bbolt database.

It ships as three things you can use independently:

- A Go library — embed a hardened secret store directly in your process, with four security levels, per-bucket DEK isolation, and a tamper-evident audit chain.
- An HTTP handler (x/keephandler) — mount keeper endpoints on any net/http mux in one call, with pluggable hooks, guards, and response encoders for access control and audit logging.
- A CLI (cmd/keeper) — a terminal interface with a persistent REPL session, no-echo secret entry, and zero shell-history exposure.

Keeper was designed as the foundational secret management layer for the Agbero load balancer but has no dependency on Agbero and works in any Go project.

Contents:

- Security model
- Cryptographic design
- Key hierarchy
- Storage schema
- Audit chain
- Jack integration
- x/keepcmd — reusable CLI operations
- x/keephandler — HTTP handler
- API reference
- Error catalogue
- Security decisions
- Dependencies

Keeper partitions secrets into buckets. Every bucket has an immutable BucketSecurityPolicy that governs how its Data Encryption Key (DEK) is protected. Four levels are available.

The URI scheme (vault://, certs://, space://, or any name you register) is independent of the security level. A scheme is just a namespace prefix that groups related buckets. The security level is a property of the BucketSecurityPolicy set at CreateBucket time and cannot be changed afterwards. You can mix security levels freely within the same scheme.

The bucket DEK is derived from the master key using HKDF-SHA256 with a domain-separated info string per bucket (keeper-bucket-dek-v1:scheme:namespace). All LevelPasswordOnly buckets are unlocked automatically when UnlockDatabase is called with the correct master passphrase.
No per-bucket credential is required at runtime. This level is appropriate for secrets the process needs at startup without human interaction.

LevelAdminWrapped. The bucket has a randomly generated 32-byte DEK unique to that bucket. The DEK is never stored in plaintext. For each authorised admin a Key Encryption Key (KEK) is derived from HKDF(masterKey‖adminCred, dekSalt) and used to wrap the DEK via XChaCha20-Poly1305. The bucket is inaccessible until an admin calls UnlockBucket with their credential. The master passphrase alone cannot decrypt the bucket. Revoking one admin does not affect any other admin's wrapped copy.

LevelHSM. The bucket DEK is generated at CreateBucket time and immediately wrapped by a caller-supplied HSMProvider. The provider performs the wrap and unwrap operations — keeper never handles the raw DEK after handing it to the provider. UnlockDatabase automatically calls the provider to unwrap and seed the Envelope for all registered HSM buckets. Master key rotation does not re-encrypt these buckets; the DEK is provider-controlled. A built-in SoftHSM implementation backed by a memguard-protected wrapping key is available in pkg/hsm for testing and CI environments. Do not use it in production.

LevelRemote. Identical to LevelHSM in key management behaviour, but the HSMProvider is implemented by pkg/remote.Provider — a configurable HTTPS adapter that delegates wrap and unwrap to any remote KMS service over TLS. Pre-built configurations for HashiCorp Vault Transit, AWS KMS, and GCP Cloud KMS are provided in pkg/remote. For production use, configure TLSClientCert and TLSClientKey to enable mutual TLS authentication.
```
salt      ← random 32 bytes, generated once, stored as a versioned SaltStore (unencrypted)
masterKey ← Argon2id(passphrase, salt, t=3, m=64 MiB, p=4) → 32 bytes
```

A verification hash is stored on first derivation:

```
verifyHash ← Argon2id(masterKey, "verification", t=1, m=64 MiB, p=4) → 32 bytes
```

Subsequent DeriveMaster calls recompute this hash and compare it with crypto/subtle.ConstantTimeCompare. A mismatch returns ErrInvalidPassphrase.

The KDF salt is stored unencrypted by design. It must be readable before UnlockDatabase to derive the master key — encrypting it with a key derived from the master would be circular. A KDF salt is not a secret; its purpose is uniqueness, not confidentiality.

Each plaintext value is encrypted with XChaCha20-Poly1305 using the bucket DEK:

```
nonce      ← random 24 bytes
ciphertext ← XChaCha20-Poly1305.Seal(nonce, DEK, plaintext)
```

The stored record is a msgpack-encoded Secret struct containing the ciphertext, encrypted metadata, and schema version. Authentication is implicit: a ciphertext decrypted with the wrong key produces an AEAD authentication failure before any plaintext is returned.

```
salt       ← random 32 bytes, generated at bucket creation, stored in policy
ikm        ← masterKey ‖ adminCredential
KEK        ← HKDF-SHA256(ikm, salt, info="keeper-kek-v1") → 32 bytes
wrappedDEK ← XChaCha20-Poly1305.Seal(nonce, KEK, DEK)
```

The KEK is derived using HKDF rather than a second Argon2 pass. The master key was already produced by a high-cost KDF; a second Argon2 invocation would add hundreds of milliseconds of latency to every UnlockBucket call with no security benefit. HKDF-SHA256 operates in approximately one microsecond.

The neither-alone property holds: an attacker who compromises only the database obtains the wrapped DEK and the HKDF salt but cannot derive the KEK without the master key. An attacker who compromises only the master key cannot unwrap any LevelAdminWrapped DEK without also knowing the admin credential.
Secret metadata (creation time, update time, access count, version) is encrypted separately from the ciphertext:

```
metaKey       ← HKDF-SHA256(bucketDEK, nil, info="keeper-metadata-v1") → 32 bytes
encryptedMeta ← XChaCha20-Poly1305.Seal(nonce, metaKey, msgpack(metadata))
```

For LevelAdminWrapped, LevelHSM, and LevelRemote buckets this means metadata is inaccessible without the bucket credential, preventing an attacker with read access to the database file from learning access patterns or timestamps.

All structural metadata is also encrypted at rest. Two keys are derived from the master key at UnlockDatabase time:

```
policyEncKey ← HKDF-SHA256(masterKey, nil, info="keeper-policy-enc-v1") → 32 bytes
auditEncKey  ← HKDF-SHA256(masterKey, nil, info="keeper-audit-enc-v1") → 32 bytes
```

policyEncKey encrypts the BucketSecurityPolicy values and the rotation WAL. auditEncKey encrypts the Scheme, Namespace, and Details fields of every audit event. Both keys are cleared from memory at Lock().

The cipher used for metadata encryption is the same configurable crypt.Cipher interface used for secrets — the user's cipher choice (AES-256-GCM for FIPS, XChaCha20-Poly1305 by default) flows through automatically. Wire format for all encrypted metadata blobs:

```
nonce (cipher.NonceSize() bytes) || AEAD-ciphertext
```

On-disk policy keys are opaque hashes rather than plaintext scheme:namespace strings, preventing offline enumeration of bucket names:

```
base ← hex(SHA-256("scheme:namespace"))[:32]   // 32 hex chars = 128-bit key space

_policies/<base>          → encrypted BucketSecurityPolicy
_policies/<base>__hash__  → SHA-256(encrypted policy bytes)
_policies/<base>__hmac__  → HMAC-SHA256(policyKey, encrypted policy bytes)
```

The in-memory schemeRegistry continues to use "scheme:namespace" as its key — only the on-disk representation changes.
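The opaque policy-key derivation above is a one-liner over the standard library; this sketch shows the truncated-hash construction (the helper name is illustrative, not keeper's API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// policyKeyBase derives the opaque on-disk key for a bucket policy:
// the first 32 hex characters (128 bits) of SHA-256("scheme:namespace").
func policyKeyBase(scheme, namespace string) string {
	sum := sha256.Sum256([]byte(scheme + ":" + namespace))
	return hex.EncodeToString(sum[:])[:32]
}

func main() {
	base := policyKeyBase("vault", "system")
	fmt.Println(len(base))                               // 32
	fmt.Println(base != policyKeyBase("vault", "other")) // true
}
```

Reading the database file yields only these 128-bit prefixes, so bucket names cannot be enumerated offline; confirming a suspected name still requires hashing a guess, which is why the scheme:namespace strings never appear on disk.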
Each policy record carries two integrity tags written atomically in one bbolt transaction:

```
hash      ← SHA-256(encryptedPolicyBytes)                              — unauthenticated, pre-unlock integrity
policyKey ← HKDF-SHA256(masterKey, nil, info="keeper-policy-hmac-v1") → 32 bytes
hmac      ← HMAC-SHA256(policyKey, encryptedPolicyBytes)               — authenticated, post-unlock integrity
```

Before UnlockDatabase, only the SHA-256 hash is available. After unlock, loadPolicy verifies the HMAC tag. UnlockDatabase calls upgradePolicyHMACs to backfill HMAC tags on policies created before this feature existed.

```
auditKey ← HKDF-SHA256(masterKey, nil, info="keeper-audit-hmac-v1") → 32 bytes
HMAC     ← HMAC-SHA256(auditKey, event fields including Seq)
```

The signing key is activated at UnlockDatabase and cleared at Lock. When the master key is rotated, Rotate appends a key-rotation checkpoint event to every active audit chain, signed with the old audit key as the final event of the old epoch. History is never rewritten; the checkpoint is the trust bridge between epochs.

```
passphrase
 │
 └─ Argon2id(salt) ──→ masterKey (32 bytes, memguard Enclave)
     │
     ├─ HKDF("keeper-audit-hmac-v1")  ──→ auditKey     (HMAC signing)
     ├─ HKDF("keeper-audit-enc-v1")   ──→ auditEncKey  (audit field encryption)
     ├─ HKDF("keeper-policy-hmac-v1") ──→ policyKey    (policy HMAC)
     ├─ HKDF("keeper-policy-enc-v1")  ──→ policyEncKey (policy/WAL encryption)
     │
     ├─ [LevelPasswordOnly]
     │   └─ HKDF("keeper-bucket-dek-v1:scheme:ns") ──→ DEK
     │       └─ HKDF("keeper-metadata-v1") ──→ metaKey
     │
     ├─ [LevelAdminWrapped]
     │   ├─ random 32 bytes ──→ DEK
     │   │   └─ HKDF("keeper-metadata-v1") ──→ metaKey
     │   │
     │   └─ HKDF("keeper-kek-v1", masterKey‖adminCred, dekSalt)
     │       └─ KEK
     │           └─ XChaCha20-Poly1305(KEK, DEK) ──→ wrappedDEK
     │
     └─ [LevelHSM / LevelRemote]
         ├─ random 32 bytes ──→ DEK
         │   └─ HKDF("keeper-metadata-v1") ──→ metaKey
         │
         └─ HSMProvider.WrapDEK(DEK) ──→ wrappedDEK (stored; provider controls the wrapping key)
```

All intermediate keys are zeroed immediately after use. The master key is never written to disk in any form.
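The tamper-evident audit chain signed by auditKey can be illustrated with a reduced event struct. This is a sketch: real keeper events carry more fields (BucketID, EncDetails, Seq, the HMAC layer) and different names.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// event is a simplified audit record for illustration only.
type event struct {
	ID, Scheme, Namespace, Type string
	PrevChecksum, Checksum      [32]byte
}

// checksum hashes the previous link plus this event's fields, forming a
// chain: altering any event invalidates it and every later link.
func checksum(prev [32]byte, e event) [32]byte {
	h := sha256.New()
	h.Write(prev[:])
	h.Write([]byte(e.ID + e.Scheme + e.Namespace + e.Type))
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// verifyChain walks the chain and recomputes every checksum, the
// key-free "Public tier" check from the table below.
func verifyChain(events []event) bool {
	var prev [32]byte
	for _, e := range events {
		if e.PrevChecksum != prev || checksum(prev, e) != e.Checksum {
			return false
		}
		prev = e.Checksum
	}
	return true
}

func main() {
	var chain []event
	var prev [32]byte
	for _, id := range []string{"a", "b", "c"} {
		e := event{ID: id, Scheme: "vault", Namespace: "system", Type: "set", PrevChecksum: prev}
		e.Checksum = checksum(prev, e)
		chain = append(chain, e)
		prev = e.Checksum
	}
	fmt.Println(verifyChain(chain)) // true
	chain[1].Type = "delete"        // tamper with the middle event
	fmt.Println(verifyChain(chain)) // false
}
```

Checksums alone detect accidental or naive tampering; the HMAC layer described above is what stops an attacker with write access from simply recomputing the chain.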
The underlying database is bbolt. All buckets and their contents:

| bbolt bucket | Key | Value |
|---|---|---|
| __meta__ | salt | msgpack — SaltStore (unencrypted; circular dependency if encrypted) |
| __meta__ | verify | raw bytes — Argon2id verification hash |
| __meta__ | rotation_wal | nonce‖AEAD(msgpack(RotationWAL)) |
| __meta__ | bucket_dek_done | "1" — DEK migration completion marker |
| __policies__ | hex(SHA-256(scheme:ns))[:32] | nonce‖AEAD(msgpack(BucketSecurityPolicy)) |
| __policies__ | <base>__hash__ | hex SHA-256 of encrypted policy bytes |
| __policies__ | <base>__hmac__ | hex HMAC-SHA256(policyKey, encrypted policy bytes) |
| __audit__/scheme/namespace | event UUID | JSON — audit Event |
| __audit__/scheme/namespace | __chain_index__ | JSON — chainIndex |
| scheme/namespace | key string | msgpack — Secret struct |

```go
type Secret struct {
	Ciphertext    []byte `msgpack:"ct"`
	EncryptedMeta []byte `msgpack:"em,omitempty"`
	SchemaVersion int    `msgpack:"sv"` // always 1
}
```

The Event struct uses separate plaintext routing fields (Scheme, Namespace) alongside encrypted payload fields (EncScheme, EncNamespace, EncDetails). Checksums are computed over the plaintext routing fields and the encrypted EncDetails bytes, so chain integrity can be verified at three tiers without any key:

| Tier | Has | Can verify |
|---|---|---|
| Public | Nothing | SHA-256 checksum chain (detects tampering and insertion) |
| Audit-key holder | auditEncKey | Full chain + decrypt Scheme/Namespace/Details |
| Operator | Master passphrase | Everything |

The KDF salt is stored as a msgpack-encoded SaltStore under the salt metadata key. Each salt rotation appends a new SaltEntry and advances CurrentVersion. Old entries are retained as an audit trail. The SaltStore is stored unencrypted — see Security decisions.

Rotate writes a WAL before touching any record. The WAL carries WrappedOldKey: the pre-rotation master key encrypted with the new master key.
After a crash the old passphrase is gone; WrappedOldKey is the only correct way to carry the old key across the boundary. At UnlockDatabase, when a WAL is present, the new master key decrypts WrappedOldKey and rotation resumes from the WAL cursor. The WAL itself is encrypted with policyEncKey.

Every significant operation appends a tamper-evident event to the bucket's audit chain. Chain integrity depends on two mechanisms.

Checksum. SHA-256 over prevChecksum, ID, BucketID, Scheme, Namespace, EncDetails, EventType, and Timestamp. Using Scheme/Namespace as plaintext (always preserved alongside the encrypted forms) ensures the checksum is stable across load paths. EncDetails provides integrity over the encrypted payload.

HMAC. HMAC-SHA256 over all fields including Seq. An attacker who can write to the database but does not know the audit key cannot produce a valid HMAC. VerifyIntegrity checks both layers for every event.

Key rotation epoch boundary. At Rotate, a checkpoint event is appended to every active chain carrying fingerprints of both the outgoing and incoming audit keys. The checkpoint is signed with the outgoing key. Auditors holding any epoch key can recover subsequent epoch keys from the wrapped_new_key field and verify HMAC continuity across the full chain.

Automatic pruning. When AuditPruneInterval is set in Config, a jack.Scheduler runs periodically and calls PruneEvents on every registered bucket. LevelHSM and LevelRemote buckets are never pruned regardless of this setting.

Jack is an optional process supervision library. When a JackConfig is provided via WithJack, keeper activates background components automatically: auto-lock Looper, per-bucket DEK Reaper, health monitoring patients (bbolt read latency + encrypt/decrypt round-trip), audit prune scheduler, and async event Pool. Keeper never calls pool.Shutdown — the pool lifecycle belongs to the caller.

x/keepcmd provides reusable keeper operations decoupled from any CLI framework.
Embed it in your own application to get typed, testable secret management without pulling in the CLI binary.

```go
import "github.com/agberohq/keeper/x/keepcmd"

cmds := &keepcmd.Commands{
	Store: func() (*keeper.Keeper, error) {
		return security.KeeperOpen(cfg) // your own config
	},
	Out:     keepcmd.PlainOutput{},
	NoClose: false, // true in REPL / session contexts
}

cmds.List()                  // all keys: scheme://namespace/key
cmds.List("vault")           // all keys in scheme vault
cmds.List("vault", "system") // all keys in vault://system
cmds.Get("vault://system/jwt_secret")
cmds.Set("vault://system/jwt_secret", "newsecret", keepcmd.SetOptions{})
cmds.Rotate(newPassphraseBytes)   // caller resolved the passphrase — no prompter dependency
cmds.RotateSalt(currentPassBytes) // same
```

keepcmd never calls prompter or reads from stdin. Passphrase resolution is entirely the caller's responsibility — this keeps the package safe in headless server contexts. NoClose: true prevents Commands from calling store.Close() after each operation. Use this in REPL / session contexts where one store is shared across many calls.

x/keephandler mounts keeper HTTP endpoints on any net/http mux. No external router dependency — it uses Go 1.22+ method+pattern routing with stdlib http.ServeMux.

```go
import "github.com/agberohq/keeper/x/keephandler"

keephandler.Mount(mux, store,
	keephandler.WithPrefix("/api/keeper"),
	keephandler.WithGuard(func(w http.ResponseWriter, r *http.Request, route string) bool {
		if !acl.Allow(r.Header.Get("X-Principal"), route) {
			http.Error(w, `{"error":"forbidden"}`, http.StatusForbidden)
			return false
		}
		return true
	}),
	keephandler.WithHooks(
		keephandler.Hook{
			Route:       keephandler.RouteGet,
			CaptureBody: false,
			After: func(r *http.Request, status int, _ []byte) {
				audit.Log(r.URL.Path, status)
			},
		},
	),
)
```
This analysis was produced by the Genesis Park editorial team with the help of AI. The original article is available via the source link.