Meta abandons open-source Llama for proprietary Muse Spark
📰 News
#ai models
#llama
#meta
#muse spark
#superintelligence
Source: hackernews · Summarized and analyzed by Genesis Park
Summary
Meta has effectively halted development of its open LLaMA models and shifted its strategy to a new in-house proprietary large language model, Muse Spark. CEO Mark Zuckerberg, dissatisfied with how the existing models trailed competitors, had the new model built from scratch by Superintelligence Labs, a division founded in 2025. The company has also launched a sweeping overhaul, spending heavily to recruit AI talent from rivals for its next-generation AI development.
Full text
Meta has decided that its new proprietary AI LLM, Muse Spark, will be much more profitable than the open-source Llama. What are Llama users to do?

Back in 2023, Meta released Llama 2.0 and proclaimed it to be open source. Even if Llama wasn't actually open source, it sounded good to developers. Indeed, in October 2024, Meta founder and CEO Mark Zuckerberg proclaimed, "Open Source AI is the Path Forward." In March of last year, the company issued a press release celebrating 1 billion downloads of Llama. That was then. This is now.

Meta's pivot to proprietary AI

For all practical purposes, Meta has abandoned developing Llama in favor of its new proprietary program, Muse Spark, announced this month. The model was developed by Meta Superintelligence Labs, a new division formed in 2025 after Zuckerberg recruited Scale AI's Alexandr Wang to jump-start Meta's AI efforts. Zuckerberg was reportedly unhappy with how Llama models lagged behind ChatGPT and Claude, so Muse Spark was built from scratch with entirely new infrastructure, architecture, and data pipelines. The company also went on a "blockbuster spending spree" to poach AI talent from the competition. Muse Spark is in no way, shape, or form a child of Llama.

We don't know exactly why Meta dropped Llama from its priority list, as the company hasn't addressed it. When The New Stack asked Meta about Llama and Muse Spark, the company didn't respond. What we can see from Meta's public statements is that while "current Llama models will continue to be available as open source," this only confirms that existing models will remain available; it says nothing about future development. The expectation within the AI community is that Llama will receive incremental updates and maintenance. You can forget about it receiving Muse Spark's massive frontier investment.
No clear migration path

What does this mean for current Llama users? They're in trouble. There is no migration path from Llama to Muse Spark because the two have fundamentally different deployment models. Llama offers downloadable open weights for self-hosting and fine-tuning, while Muse Spark is cloud-only, with no downloadable weights, no self-hosting capability, and currently only private API preview access. Even if Meta keeps its promise to open-source some of its newer models, it's hard to imagine how Llama users could migrate to these platforms.

As Andrew Ng writes in The Batch, his AI newsletter, "The proprietary release has raised concerns among developers, many of whom have built projects on open-weights Llama models." At the same time, Ng writes, Meta's shift may help it "to compete for business customers alongside OpenAI, Google, and Anthropic. However, its pivot away from being the leading U.S. champion of open weights is a significant loss for the developer community."

This isn't, by the way, a small number of developers. A year ago, Meta reported that Llama had been downloaded 1.2 billion times. Even then, however, developers thought Meta was no longer investing in Llama at the level needed to compete with the leading frontier models of the day. Nevertheless, thousands of companies and an untold number of developers and individuals are still using Llama. What can they do?

Three options for developers

Llama developers do have some options:

- Continue using existing Llama models, which remain available on major cloud providers. These LLMs, however, will increasingly lag behind their frontier competitors.
- Switch to competing open-source models from Mistral, DeepSeek, or Alibaba's Qwen.
- Migrate to proprietary APIs from the major AI providers. Meta's in-house programmers, for example, had moved to Claude Sonnet long before Muse Spark arrived.

The switching costs are substantial. Migrations require rewriting vendor-specific API integrations, adapting proprietary training data, and rebuilding custom tooling and workflows. AI migration is not easy, nor is it cheap.

Llama forks fill the gap

Perhaps the easiest way forward will be to use one or more of the numerous Llama forks. The most important of these is llama.cpp, a popular C++ inference engine for running Llama models locally. Llama.cpp supports a wide variety of large language models, extending far beyond Meta's Llama family.

From llama.cpp, several other significant forks have emerged. Perhaps the best known is ik_llama.cpp, a performance-focused fork that promises better CPU and hybrid GPU/CPU performance than llama.cpp. There's also a Rockchip NPU fork, Rkllama, which integrates llama.cpp with Rockchip NPU acceleration for embedded systems built on chips like the RK3588 and works with nearly all standard llama.cpp-compatible models. There's llama-rs as well, a Rust implementation marketed as an "ultra fast fork for local AI." Finally, there's OpenLLaMA, an Apache-licensed open-source reproduction of Meta's original LLaMA models. It's available in 3B, 7B, and 13B parameter versions, all trained on 1 trillion tokens, and comes with weights for both PyTorch and JAX.

Meta may benefit from this change. Llama users, however, must now find another way forward. Wish them luck. They're going to need it.
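To make the switching costs described above concrete, here is a minimal, hypothetical sketch in Python. Every class, method, and response shape below is invented for illustration (these are not real SDKs): it shows how a self-hosted open-weights deployment and a cloud-only API can differ in call shape, parameters, and response parsing, which is exactly what forces call sites to be rewritten or wrapped in an adapter layer during a migration.

```python
# Hypothetical sketch: why migrating between LLM providers is costly.
# Each vendor exposes a different request/response shape, so every call
# site must be rewritten or hidden behind an adapter. The provider
# classes below are stand-ins, not real SDKs.

from dataclasses import dataclass


@dataclass
class Completion:
    """Normalized result that every adapter must produce."""
    text: str
    model: str


class SelfHostedLlama:
    """Stand-in for a self-hosted open-weights server (e.g. llama.cpp-style)."""

    def generate(self, prompt: str, max_tokens: int) -> dict:
        # Local servers often return their own ad-hoc JSON shape.
        return {"content": f"[llama] {prompt}", "tokens_used": min(max_tokens, 32)}


class CloudOnlyModel:
    """Stand-in for a cloud-only, API-gated model (a Muse Spark-style service)."""

    def create_response(self, messages: list, limit: int) -> dict:
        # Chat-style APIs usually want role-tagged messages, not bare prompts,
        # and nest their output differently.
        return {"output": [{"role": "assistant", "text": f"[cloud] {messages[-1]['content']}"}]}


def complete_with_llama(prompt: str) -> Completion:
    """Adapter for the self-hosted path."""
    raw = SelfHostedLlama().generate(prompt, max_tokens=128)
    return Completion(text=raw["content"], model="self-hosted-llama")


def complete_with_cloud(prompt: str) -> Completion:
    """Adapter for the cloud-only path; note the different call and parsing."""
    raw = CloudOnlyModel().create_response([{"role": "user", "content": prompt}], limit=128)
    return Completion(text=raw["output"][0]["text"], model="cloud-only")


if __name__ == "__main__":
    # Same prompt, two providers: the request shape, parameter names, and
    # response parsing all differ -- the rewrite cost described above.
    print(complete_with_llama("hello").text)  # [llama] hello
    print(complete_with_cloud("hello").text)  # [cloud] hello
```

The adapter pattern here is the usual mitigation: once call sites depend only on `Completion`, swapping providers means rewriting one adapter rather than every caller, though training data and tooling still need separate migration work.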
This analysis was written by the Genesis Park editorial team with the assistance of AI. The original article is available via the source link.