SK hynix begins mass production of AI memory SOCAMM2
#dram
#fms 2025
#nand
#sk hynix
#hardware/semiconductor
Summary
SK hynix announced that it has begun phased mass production of SOCAMM2, its next-generation AI memory product. Compared with existing products, SOCAMM2 dramatically improves bandwidth and capacity and is expected to further boost AI accelerator performance. The production launch is seen as an important step in strengthening SK hynix's position in the global AI semiconductor market.
Why it matters
Related entities
SK hynix
SK하이닉스
SOCAMM2
Body
[Image: SK hynix’s 1c-nanometer LPDDR5X-based SOCAMM2 memory module.]

SK hynix has begun mass production of its next-generation SOCAMM2 192GB memory module, optimized for NVIDIA’s upcoming AI server platform, Vera Rubin. SOCAMM2 is a new type of memory module that adapts low-power DRAM--traditionally used in mobile devices--for server environments. Based on LPDDR5X, it delivers high-speed data processing while significantly reducing power consumption, making it suitable for AI data centers.

The company said its SOCAMM2, built on a 1c-nanometer (sixth-generation 10nm-class) process, achieves more than twice the bandwidth of conventional server DRAM modules while improving energy efficiency by over 75%. The advanced process enables higher density, allowing more DRAM cells to be packed into the same area, boosting both capacity and performance while lowering power usage.

SOCAMM2 is designed to fill the gap between high-bandwidth memory (HBM) and conventional server DRAM. While HBM offers high speed, it comes with limitations in capacity, power consumption, and heat. Traditional DRAM, on the other hand, provides large capacity but lower speed and efficiency. SOCAMM2 combines the strengths of both, offering high bandwidth with lower power consumption.

The module is specifically optimized for NVIDIA’s Vera Rubin architecture, which connects GPU HBM4 and CPU system memory via NVLink-C2C technology, effectively allowing them to function as a unified memory pool. When GPU memory is fully utilized, workloads can be offloaded to SOCAMM2, improving overall efficiency. This architecture helps reduce “memory bottlenecks,” where slower data transfer between memory and GPU limits overall AI processing speed. By enabling faster data exchange closer to the GPU, SOCAMM2 enhances system performance, particularly for large language model inference and multi-model workloads.
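The tiered-memory idea described above, where workloads spill from fast-but-small HBM into a larger intermediate tier when GPU memory fills up, can be sketched as a toy allocator. This is an illustrative model only, not SK hynix or NVIDIA code; the tier names, capacities, and bandwidth figures are hypothetical placeholders.

```python
# Toy model of tiered memory placement: try the fastest tier first,
# spill ("offload") to slower, larger tiers when faster ones are full.
# All numbers below are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gb: int       # total capacity of this tier
    bandwidth_gbps: int    # relative speed; fastest tier is tried first
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

def allocate(tiers: list[Tier], size_gb: int) -> str:
    """Place a workload in the fastest tier with enough free capacity."""
    for tier in sorted(tiers, key=lambda t: t.bandwidth_gbps, reverse=True):
        if tier.free_gb() >= size_gb:
            tier.used_gb += size_gb
            return tier.name
    raise MemoryError("no tier can hold this workload")

# Hypothetical hierarchy: small/fast HBM, a mid SOCAMM2-like tier, big/slow DRAM.
tiers = [
    Tier("HBM", capacity_gb=96, bandwidth_gbps=4000),
    Tier("SOCAMM2", capacity_gb=192, bandwidth_gbps=900),
    Tier("DDR5", capacity_gb=512, bandwidth_gbps=300),
]

placements = [allocate(tiers, 64), allocate(tiers, 64), allocate(tiers, 64)]
print(placements)  # the first 64GB workload lands in HBM; the next two spill to SOCAMM2
```

The point of the sketch is the ordering: a middle tier only helps if it is both large enough to absorb the spill and fast enough that spilled workloads are not bottlenecked, which is the niche the article says SOCAMM2 targets.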
SK hynix expects strong demand as the AI market shifts from training to inference, where power-efficient memory becomes increasingly critical. Kim Joo-sun, Chief Marketing Officer and Head of AI Infrastructure at SK hynix, said, “With the SOCAMM2 192GB product, we have set a new benchmark for AI memory performance. We will continue to strengthen partnerships with global AI customers and establish ourselves as the most trusted AI memory solution provider.”