Offline Mac Translator (WebRTC + Llama.cpp): Baking My C++
hackernews
📦 Open Source
#c++
#llama
#llama.cpp
#webrtc
#local ai
#offline translation
#hardware/semiconductors
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
This project is an experimental cut of a local AI translator. It documents the fight through C++ compilation errors and Metal GPU dependency problems required to reach a modest 1.19 GB RAM footprint and a zero-latency WebRTC VAD. The currently published V6.1 source is complicated to install and only thinly documented, so the developer instead ships the stable, optimized, easy-to-use V6.2 as a packaged macOS .dmg app.
Full Text
Welcome to the bleeding edge of local AI translation.

📖 Looking for the detailed documentation?

> If you want to see how this project started and the logic behind it, check out the perfectly documented V4.1 Legacy README here.

The code currently in this repo is the raw, unpolished V6.1. To achieve the 1.19GB RAM limit and zero-latency WebRTC VAD, I had to fight through 14 consecutive C++ compilation errors and Metal GPU dependency hell. To be completely honest with you... I don't even fully remember how I fixed all of them. 😵‍💫

But I swear to God, I am actively working hard to fix the plumbing right now! 🪠🛠️ (Clean code is coming in the future, promise!)

Because of this chaotic state, there is no step-by-step installation tutorial for V6.1. For the brave souls attempting to run the source code, here is your basic survival kit. You will still need to figure out the Metal/C++ library bindings yourself:

```
pip install webrtcvad
pip install pyaudio
pip install numpy
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python whisper-cpp-python
```

(Note: You might also need to fight with ffmpeg and local whisper paths. May the compiler be with you. ⚔️)

If you value your weekend and your sanity, I have already compiled, packaged, and optimized the stable V6.2 version into a clean, double-click-to-run macOS .dmg app.

👉 Don't build it. Just run it. Get the ready-to-use .dmg here:

> https://liwenchen.gumroad.com/l/nanolingo-beta
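The "zero-latency WebRTC VAD" piece of the survival kit is the most self-contained part to reproduce. Below is a minimal sketch, not the project's actual code: it assumes 16 kHz mono 16-bit input and 30 ms frames (webrtcvad only accepts 10/20/30 ms frames at 8/16/32/48 kHz), and the aggressiveness level and ~300 ms end-of-utterance heuristic are placeholder choices, not the author's settings.

```python
# Hypothetical sketch of a WebRTC VAD front-end; settings are assumptions.
import pyaudio
import webrtcvad

RATE = 16000                               # sample rate accepted by webrtcvad
FRAME_MS = 30                              # frame length in milliseconds
FRAME_SAMPLES = RATE * FRAME_MS // 1000    # 480 samples per 30 ms frame

vad = webrtcvad.Vad(2)                     # 0 = least aggressive, 3 = most

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=FRAME_SAMPLES)

voiced = []            # speech frames to hand to the STT stage once speech ends
silence_frames = 0

try:
    while True:
        frame = stream.read(FRAME_SAMPLES, exception_on_overflow=False)
        if vad.is_speech(frame, RATE):
            voiced.append(frame)
            silence_frames = 0
        elif voiced:
            silence_frames += 1
            if silence_frames > 10:        # ~300 ms of silence ends an utterance
                utterance = b"".join(voiced)
                voiced.clear()
                print(f"captured {len(utterance)} bytes of speech")
                # ...pass `utterance` to whisper/translation here
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()
```

Because the VAD decision is made per 30 ms frame as audio arrives, speech segmentation adds essentially no buffering delay, which is presumably what the "zero-latency" claim refers to.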
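For the translation stage, here is a hedged sketch of how llama-cpp-python is typically driven once the Metal build from the survival kit succeeds. The model filename, prompt template, and context size are illustrative assumptions, not V6.1's actual configuration; `n_gpu_layers=-1` offloads all layers to the Metal GPU, and a small `n_ctx` is one way to keep resident RAM in the ~1.19GB ballpark.

```python
# Hypothetical translation stage; model path and prompt format are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="models/translator-q4_k_m.gguf",  # placeholder quantized model
    n_gpu_layers=-1,   # offload every layer to the Metal GPU
    n_ctx=512,         # small context window keeps the RAM footprint low
    verbose=False,
)

def translate(text: str, target_lang: str = "English") -> str:
    prompt = f"Translate the following into {target_lang}:\n{text}\nTranslation:"
    out = llm(prompt, max_tokens=128, stop=["\n"])
    return out["choices"][0]["text"].strip()

print(translate("Bonjour tout le monde"))
```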
This analysis was produced by the Genesis Park editorial team with the help of AI. The original post is available via the source link.