Samsung, SK hynix morph into AI foundries as Big Tech reshapes chipmaking

#hardware/semiconductors #hbm4 #skhynix #semiconductors #samsungelectronics
Source: [AI] Samsung · Summarized and analyzed by Genesis Park

Summary

In the AI era, Big Tech companies are demanding solutions optimized for their own designs rather than standardized memory, prompting Samsung Electronics and SK hynix to overhaul their production models. SK hynix is partnering with TSMC, outsourcing development of the HBM4 logic base die, and channeling its roughly 40 trillion won 2026 capital budget into advanced packaging, while Samsung Electronics is countering with its strengths as an IDM that houses memory, foundry and packaging under one roof.

Full text

As AI workloads grow more specialized, Big Tech clients are no longer buying standardized memory at scale but demanding tightly integrated, tailor-made solutions such as High Bandwidth Memory (HBM) co-designed with their own architectures, forcing Korea’s two largest chipmakers to rethink their traditional mass-production model. Companies like Nvidia and Tesla are driving this shift, pushing suppliers to align memory, logic and packaging into unified, customized stacks.

In response, SK hynix and Samsung are increasingly operating less like commodity fabs and more like contract foundries for a handful of deep-pocketed clients. SK hynix has effectively become a dedicated memory partner to Nvidia, dominating supply of the HBM3 and HBM3E used in the latter’s AI accelerators. To extend that lead into next-generation HBM4, SK hynix has forged closer ties with TSMC, outsourcing logic foundry work for base dies to leverage TSMC’s 12-nanometer and upcoming 3-nanometer processes. The strategy marks a departure from traditional vertically integrated expansion. Instead of pouring capital into logic fabrication, SK hynix is channeling its estimated 40 trillion won ($29 billion) 2026 capital expenditure into advanced packaging and custom HBM design, prioritizing speed, yield stability and alignment with Nvidia’s roadmap.

Samsung, by contrast, is leaning into its identity as a full-stack Integrated Device Manufacturer (IDM), combining memory, foundry and advanced packaging under one roof. It is pitching a “turnkey” model to clients seeking to bypass Nvidia’s ecosystem and build proprietary AI chips. Alongside supplying HBM and foundry services to AMD, Samsung is expanding cooperation with Tesla on next-generation self-driving chips (HW 5.0) and custom memory, while working with AI chip startups such as Tenstorrent and Naver.
To support the shift, Samsung plans to invest about 40.9 trillion won in its Device Solutions division this year, aiming to more than triple its custom HBM capacity. Industry experts say the transition from standardized memory production to client-specific design will define the next phase of the AI supercycle.

“The market so far has been dictated by the Nvidia–TSMC–SK hynix axis because Nvidia required specific memory components for its GPUs,” said Kim Duk-ki, a professor of semiconductor engineering at Sejong University. “But as other Big Tech players like Tesla and Intel demand entirely new AI architectures, foundry demand is surging.”

Kim added that while the current AI boom could run for another two years before facing structural constraints such as data center energy limits, Samsung’s breadth offers a strategic hedge. “Samsung is crucial because it has everything — from memory to foundry,” he said. “Its ability to shift capacity across DRAM, NAND and custom foundry services positions it to adapt to a future where Big Tech dictates increasingly diverse, bespoke chip designs.”

By Candice Kim, [email protected]. Copyright ⓒ Aju Business Daily (ajunews.com). Unauthorized reproduction and redistribution prohibited.

This analysis was prepared by the Genesis Park editorial team with the help of AI. The original article is available via the source link.
