Show HN: A continuously running local UX agent Mac app powered by Gemma 4
hackernews
📦 Open source
#ai
#ai deals
#gemma
#mac apps
#ux/ui
#local ai
Original source: hackernews · Summarized and analyzed by Genesis Park
Summary
A free, local AI design advisor app that runs in offline environments has been released. The app captures the user's screen in real time and provides instant UX/UI feedback; to run it, Google's 26B vision model Gemma 4 must be installed via LM Studio. It works without cloud servers or subscription fees, and after initial setup (configuring a context length of at least 16,000 tokens and issuing a permission token), it can also be accessed remotely from other devices on the same network.
Full text
A local, free AI design advisor that captures your screen and gives you real-time UX/UI feedback — powered by Gemma 4 running in LM Studio. No cloud, no subscriptions.

Installation:

```
git clone https://github.com/your-username/graphicdesigner.git
cd graphicdesigner
npm install
cp .env.example .env
```

Open .env and fill in your values:

```
LM_STUDIO_API_KEY=your-token-here
MODEL_API_URL=http://localhost:1234/v1/chat/completions
MODEL_NAME=google/gemma-4-26b-a4b
```

Then run `npm start` and grant Screen Recording permission when prompted on macOS (required for screenshot capture).

LM Studio setup:

1. Download and install LM Studio from lmstudio.ai. Once open, you will see the default chat interface.
2. Click the search/download icon in the left sidebar and search for "gemma 4". Select google/gemma-4-26b-a4b — this is the 26B vision model required for screenshot analysis. Click Download and wait for it to complete.
3. Click the Developer icon (the plug-like icon) in the left sidebar to open the server status panel, then click Start Server. The status indicator will change to Running and the local URL (e.g. http://localhost:1234) will appear at the top.
4. Click Load Model (top right) and select Gemma 4 26B Instruct, making sure to pick the vision-capable variant. Once loaded, the model appears in the Loaded Models list and the right panel shows its configuration. The server is now ready to accept requests.
5. With the model loaded, open its settings in the right panel and drag the Context Length slider up — set it to at least 16,000 tokens. This ensures the conversation history (previous feedback) fits in memory so the model doesn't repeat itself.
6. Click Server Settings and make sure the following are enabled:
   - Serve on Local Network — on (required if running on a separate machine)
   - Allow per-request model overrides — on
   Note the server port (default: 1234).
7. In Server Settings, scroll down and create a new permission token. Give it a name (e.g. design-advisor) and click Create token. Copy the generated token immediately — it won't be shown again. This is your LM_STUDIO_API_KEY.

If LM Studio is running on a different machine on your network, replace localhost with that machine's local IP address (e.g. http://192.168.1.5:1234/v1/chat/completions) and make sure Serve on Local Network is enabled in Server Settings.
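Once the server is running and a permission token exists, requests to the MODEL_API_URL follow LM Studio's OpenAI-compatible chat completions format, with the screenshot embedded as a base64 data URI. The sketch below is illustrative only (the function name, prompt, and field values are assumptions, not the app's actual code); it shows the shape of a vision request and how the token from the .env file would be attached:

```javascript
// Illustrative sketch — builds the OpenAI-compatible chat payload
// that a screenshot-analysis request to LM Studio would use.
// buildVisionRequest and its parameters are hypothetical names.
function buildVisionRequest(base64Png, prompt, model) {
  return {
    model, // e.g. "google/gemma-4-26b-a4b" from MODEL_NAME
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: prompt },
          {
            // Screenshot passed inline as a base64-encoded PNG data URI
            type: "image_url",
            image_url: { url: `data:image/png;base64,${base64Png}` },
          },
        ],
      },
    ],
    max_tokens: 512,
  };
}

// Example of sending it, with the permission token as a Bearer header:
// const res = await fetch(process.env.MODEL_API_URL, {
//   method: "POST",
//   headers: {
//     "Content-Type": "application/json",
//     Authorization: `Bearer ${process.env.LM_STUDIO_API_KEY}`,
//   },
//   body: JSON.stringify(
//     buildVisionRequest(png, "Critique this UI.", process.env.MODEL_NAME)
//   ),
// });
```

If the request fails with an authorization error, the usual culprit is a stale or missing LM_STUDIO_API_KEY in .env.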
This analysis was written by the Genesis Park editorial team using AI. The original article is available via the source link.