🤖 AI Summary
Samsung announced plans to build an "AI Megafactory" powered by a cluster of 50,000 Nvidia GPUs to automate and accelerate chip manufacturing for its mobile device and robotics lines. The company said it will deploy Nvidia's hardware and Omniverse simulation software to tune its lithography and production flows, and will also use the cluster to develop and run its own AI models for its devices. Samsung also confirmed a joint engineering effort with Nvidia to adapt Samsung's lithography platform to GPU-accelerated workflows and to optimize its fourth-generation HBM memory for AI workloads. No construction timeline was provided.
The deal underscores Nvidia's central role in industrial AI and deepens its partnership with a major chip foundry and memory supplier; Samsung already makes high-bandwidth memory used in Nvidia accelerators. Nvidia representatives claim the lithography-GPU integration could deliver up to 20× performance improvements in key workflows, highlighting how large-scale GPU farms combined with physics-based simulation (Omniverse) can improve yields, shorten design cycles, and enable tighter vertical integration across memory, chip design, and AI software. The move also signals broader Korean momentum, with other conglomerates committing to similar GPU investments, and adds weight to Nvidia's Blackwell/Rubin-era demand outlook and its strategic footprint in semiconductor manufacturing.