Show HN: Vela Coach – an open-source coach that reads your Granola transcripts
hackernews
📦 Open Source
#ai models
#claude
#gemini
#gpt-5
#openai
Source: hackernews · summarized and analyzed by Genesis Park
Summary
Vela Coach is an open-source tool that analyzes your Granola meeting transcripts from the last 30 days and writes a character-analysis report grounded in personality and communication style. Users bring their own API key from OpenAI, Anthropic, or another provider; data stays in the browser's local storage, and when the analysis is uncertain, follow-up questions let you refine the report. The project is currently in alpha and released under the MIT license.
Full text
A reading of you, from your last 30 meetings.

Vela Coach reads your real Granola meeting transcripts and writes a single, dense character reading of the operator they reveal — situated against academic frameworks for personality, communication, and leadership. The kind of thing a $500/hour coach would write after binging 90 days of your calendar.

Live: coach.vela.partners · License: MIT · Status: alpha

- Sign in with Granola — OAuth 2.0 + PKCE, no API keys to copy.
- Bring your own AI — pick OpenAI, Anthropic, or Google Gemini and paste your own API key once. It lives in your browser's localStorage; we never see it persisted on our side. You pay your provider directly.
- Pick meetings — last 30 days by default, or a custom range.
- Sharpen the read — up to three quick questions if the model is unsure about an MBTI dimension. Skippable.
- Read — a single scrollable page synthesizing:
  - MBTI (Myers–Briggs) with per-letter confidence + evidence quotes
  - OCEAN / Big Five (Costa & McCrae 1992; Barrick & Mount 1991)
  - Communication style — directness, hedging rate, listening ratio, specificity
  - Leadership style (Goleman 1998)
  - Situational signature — how you flex across investor / team / customer counterparties
- Re-read — analysis can be re-run as you accumulate new meetings; readings auto-save to your browser.
- Export — download your reading as HTML (self-contained, mirrors the live UI) or Markdown (LLM-friendly).

- Tokens, your AI provider key, founder profile, and readings live in your browser's localStorage. We have no database. The Cloud Run container is stateless.
- Transcripts pass through our server in-memory only — Granola → server → your chosen AI provider → your browser. Nothing is written to disk on our infra. Logs hold counts and HTTP status codes; never content, never keys.
- Bring your own AI. OpenAI/Anthropic/Gemini billing flows through your account, with your data-use terms — not ours.
- No analytics, no cookies, no third-party scripts.
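The BYOK (bring-your-own-key) flow above can be sketched in a few lines: a per-provider key kept in localStorage and handed to a provider adapter on demand. Every name below is illustrative rather than the repo's actual API; a `Map` stands in for `window.localStorage` so the snippet runs outside a browser, and the adapters are stubs for the real SDK wrappers.

```typescript
// Hypothetical sketch of per-provider key storage + adapter lookup.
// Not the project's real code: names and shapes are assumptions.
type Provider = "openai" | "anthropic" | "gemini";

const memory = new Map<string, string>(); // swap for window.localStorage in a browser build
const keyName = (p: Provider) => `coach.apiKey.${p}`;

export function saveApiKey(provider: Provider, key: string): void {
  memory.set(keyName(provider), key);
}

export function loadApiKey(provider: Provider): string | null {
  return memory.get(keyName(provider)) ?? null;
}

// A /reset-style wipe: clears every provider's key in one call.
export function wipeApiKeys(): void {
  for (const p of ["openai", "anthropic", "gemini"] as const) {
    memory.delete(keyName(p));
  }
}

interface LLMClient {
  complete(prompt: string): Promise<string>;
}

// Stub adapters; the real ones would wrap the openai, @anthropic-ai/sdk,
// and @google/genai SDKs behind dynamic import().
const adapters: Record<Provider, (apiKey: string) => LLMClient> = {
  openai: (apiKey) => ({ complete: async (p) => `openai[${apiKey.slice(0, 3)}…]: ${p}` }),
  anthropic: (apiKey) => ({ complete: async (p) => `anthropic[${apiKey.slice(0, 6)}…]: ${p}` }),
  gemini: (apiKey) => ({ complete: async (p) => `gemini[${apiKey.slice(0, 4)}…]: ${p}` }),
};

// Build a client from the stored key; callers never touch the key directly.
export function getClient(provider: Provider): LLMClient {
  const key = loadApiKey(provider);
  if (!key) throw new Error(`No API key saved for ${provider}`);
  return adapters[provider](key);
}
```

The point of the indirection is that the key never leaves the storage layer except inside an adapter closure, which is what makes a single `wipeApiKeys()` (the `/reset` behavior) a complete cleanup.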
/reset wipes everything Coach has written to your device. Full details in PRIVACY.md.

```shell
git clone https://github.com/vela-engineering/coach.git
cd coach
npm install
npm run dev -- -p 4070
```

Open http://localhost:4070, sign in with Granola, then pick an AI provider and paste your API key. Get a key from:

- OpenAI: https://platform.openai.com/api-keys (sk-…)
- Anthropic: https://console.anthropic.com/settings/keys (sk-ant-…)
- Google Gemini: https://aistudio.google.com/apikey (AIza…)

That's it — no .env file required for local dev. The repo's open-source OAuth client is pre-registered for localhost:4070.

- Framework: Next.js 16 (App Router) + TypeScript
- Styling: Tailwind CSS v4 + Framer Motion
- AI: any of OpenAI (gpt-5.5), Anthropic (claude-opus-4-7), or Google Gemini (gemini-3.1-pro-preview) via dynamic-imported provider adapters in src/lib/llm/
- Data: Granola public REST API (public-api.granola.ai/v1)
- Auth: Granola MCP OAuth (mcp-auth.granola.ai) — RFC 7591 dynamic client registration
- Tests: Vitest, 330+ tests over src/__tests__/

```
src/
├── app/
│   ├── api/
│   │   ├── auth/token/       # OAuth token exchange proxy (no logs of payload)
│   │   ├── character/        # Streaming Character Reading endpoint
│   │   ├── chat/             # Coaching follow-up chat
│   │   ├── llm/verify/       # One-shot key probe before saving
│   │   ├── meetings/         # Granola REST proxy
│   │   └── repo-stats/       # GitHub stars (cached server-side)
│   ├── auth/callback/        # OAuth redirect handler
│   └── reset/                # Wipe-localStorage page (with confirm)
├── components/
│   ├── Onboarding.tsx        # Cover → privacy → AI provider → sign in
│   ├── ProviderSetup.tsx     # Pick provider + paste key + verify
│   ├── FounderIntake.tsx     # Founder profile (pre-filled on returning)
│   ├── SharpenRead.tsx       # MBTI follow-up questions
│   ├── CharacterReading.tsx  # The reading page
│   ├── CoachView.tsx         # Header + reading shell + export menu
│   ├── ConfirmDialog.tsx     # Reusable destructive-action modal
│   └── ...
└── lib/
    ├── llm/                  # Provider abstraction
    │   ├── types.ts          # LLMClient, LLMProvider, LLMUsage
    │   ├── index.ts          # getLLMClient(provider, key) factory
    │   ├── gemini.ts         # @google/genai adapter
    │   ├── openai.ts         # openai adapter
    │   └── anthropic.ts      # @anthropic-ai/sdk adapter
    ├── apiKey.ts             # Per-provider keys in localStorage
    ├── character.ts          # Analysis engine (provider-agnostic)
    ├── grounding.ts          # Web search (Gemini only — graceful no-op for others)
    ├── granolaRest.ts        # Granola REST client
    ├── relationshipMetrics.ts
    ├── exportHtml.ts         # Self-contained HTML export
    ├── sessionFile.ts        # Markdown export + parse
    ├── reset.ts              # Single source of truth for "wipe all data"
    ├── mbti.ts               # Pure MBTI helpers
    ├── phase.ts              # Phase state machine
    ├── founderProfile.ts     # localStorage profile
    └── sessions.ts           # localStorage session store
```

```shell
npm test            # vitest run, 330+ tests
npm run test:watch
```

```shell
# Register a new Granola OAuth client for your prod redirect URI:
curl -sS -X POST https://mcp-auth.granola.ai/oauth2/register \
  -H "Content-Type: application/json" \
  -d '{
    "client_name": "Your Coach (production)",
    "redirect_uris": ["https://your-domain/auth/callback"],
    "grant_types": ["authorization_code", "refresh_token"],
    "response_types": ["code"],
    "token_endpoint_auth_method": "none",
    "scope": "openid email profile offline_access"
  }'
# → returns { "client_id": "client_xxx" }

# Build via Cloud Build with that client_id baked in:
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_IMAGE=us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/coach:latest,_NEXT_PUBLIC_GRANOLA_CLIENT_ID=client_xxx

# Roll out to Cloud Run — note: NO server-side LLM key needed (BYOK):
gcloud run deploy coach \
  --image=us-central1-docker.pkg.dev/$PROJECT_ID/cloud-run-source-deploy/coach:latest \
  --region=us-central1 --allow-unauthenticated --memory=1Gi --timeout=900 \
  --set-env-vars=NEXT_PUBLIC_GRANOLA_CLIENT_ID=client_xxx
```

The client_id is public by design (PKCE OAuth, no client secret).
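What makes a public client_id safe is PKCE (RFC 7636): the client generates a random code_verifier and sends only its S256 hash (the code_challenge) in the authorize request, proving possession of the verifier at token exchange. A minimal sketch, using Node's crypto in place of the browser's Web Crypto — the authorize endpoint path below is an assumption, not taken from the repo:

```typescript
import { createHash, randomBytes } from "node:crypto";

// base64url without padding, as RFC 7636 requires.
const base64url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

export function makePkcePair(): { verifier: string; challenge: string } {
  // 32 random bytes → a 43-char base64url verifier (within RFC 7636's 43–128 range).
  const verifier = base64url(randomBytes(32));
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}

// The authorize request carries only the challenge; the later token exchange
// sends the verifier, so intercepting the redirect alone cannot mint a token.
export function authorizeUrl(clientId: string, redirectUri: string, challenge: string): string {
  const u = new URL("https://mcp-auth.granola.ai/oauth2/authorize"); // path assumed for illustration
  u.searchParams.set("response_type", "code");
  u.searchParams.set("client_id", clientId);
  u.searchParams.set("redirect_uri", redirectUri);
  u.searchParams.set("code_challenge", challenge);
  u.searchParams.set("code_challenge_method", "S256");
  return u.toString();
}
```

With `token_endpoint_auth_method: "none"` in the registration above, this verifier/challenge pair is the only secret in the flow, and it is generated fresh per sign-in.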
See SECURITY.md for the trust model and PRIVACY.md for what does + doesn't touch the server.

Dark, intimate, editorial. Inspired by what a private executive coaching session feels like:

- Palette — Void black (#08080a) + warm amber candlelight (#d4a04a)
- Typography — Newsreader serif italic for personality, Outfit sans for body
- Details — Ambient glow, grain texture, dotted-underline citation links
- Voice — Second person, citation-grounded, never hedges when evidence is strong

Issues, PRs, and discussions welcome. See CONTRIBUTING.md for setup, conventions, and what we don't accept. For security issues, see SECURITY.md. For conduct, CODE_OF_CONDUCT.md.

Coach's prose draws explicitly from a small canon. Each claim in the reading footnotes its source:

- Costa & McCrae (1992) — Revised NEO Personality Inventory
- Barrick & Mount (1991) — The Big Five Personality Dimensions and Job Performance
- McCrae & Costa (1989) — Reinterpreting the Myers-Briggs Type Indicator from the Five-Factor Model
- Furnham (1996) — The Big Five vs the Big Four
- Goleman (1998) — What Makes a Leader?
- Pennebaker (2011) — The Secret Life of Pronouns
- Leary (1957) — Interpersonal Diagnosis of Personality
- Edmondson (1999) — Psychological Safety and Learning Behavior in Work Teams
- Schein (2009) — Helping
- Granovetter (1973) — The Strength of Weak Ties

MIT — © 2026 Vela Engineering.
This analysis was written by the Genesis Park editorial team with the help of AI. The original post is available via the source link.