🤖 AI Summary
SK Hynix announced it has completed development of HBM4 and is preparing for high-volume production, a move timed to supply next-generation datacenter GPUs from Nvidia and AMD slated for 2026. Technically, SK Hynix says it doubled the number of I/O terminals to 2,048 versus HBM3e's 1,024, effectively doubling per-stack bandwidth, while improving energy efficiency by more than 40%. The company also reports exceeding the JEDEC HBM4 target with a 10 Gb/s per-pin operating speed. That performance jump matters because current HBM3e tops out at roughly 36 GB and ~1 TB/s of bandwidth per stack, limiting aggregate memory bandwidth on today's accelerators.
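As a rough sanity check on those figures, peak per-stack bandwidth is just pin count times per-pin data rate. The sketch below (plain Python; the 1,024-bit HBM3e bus width and ~9.6 Gb/s HBM3e pin rate are assumptions not stated in the announcement) reproduces the numbers above:

```python
def stack_bandwidth_gbs(io_pins: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s: pins * per-pin rate (Gb/s) / 8 bits per byte."""
    return io_pins * pin_rate_gbps / 8

# HBM3e: 1,024-bit interface at ~9.6 Gb/s per pin (assumed top-bin part)
print(stack_bandwidth_gbs(1024, 9.6))   # ~1229 GB/s, the ~1 TB/s class cited above
# HBM4 per SK Hynix's figures: 2,048 I/Os at 10 Gb/s per pin
print(stack_bandwidth_gbs(2048, 10.0))  # 2560 GB/s, roughly double per stack
```

Note that the doubled interface width carries most of the gain; the 10 Gb/s pin rate adds headroom on top.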
The shift to HBM4 unlocks much larger capacity and raw throughput: Nvidia's Rubin is expected to use 288 GB of HBM4 to reach ~13 TB/s of aggregate bandwidth, while AMD's upcoming MI400 family aims for up to 432 GB and nearly 20 TB/s. Those increases translate directly into faster model training and larger in-memory models, but they also raise the power and supply-chain stakes: HBM already dominates GPU power draw, and total per-GPU power has climbed from roughly 250 W to ~1 kW as higher-capacity stacks have been adopted. SK Hynix's announcement also intensifies competition: Micron is sampling 36 GB HBM4 stacks with a 2,048-bit interface and expects a 2026 ramp, and Samsung is pushing its own HBM4 despite validation delays. Which vendor wins the volume contracts will shape GPU performance and supply resilience across the AI industry.
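The GPU-level numbers can be run through the same arithmetic in reverse. Assuming 36 GB stacks and a 2,048-bit interface per stack (both carried over from above, not confirmed for these products), the aggregate figures imply per-pin rates well below SK Hynix's 10 Gb/s headline, which would leave margin for volume parts:

```python
def implied_pin_rate_gbps(total_tbps: float, stacks: int, bus_width: int = 2048) -> float:
    """Per-pin data rate (Gb/s) implied by an aggregate-bandwidth figure."""
    per_stack_gbs = total_tbps * 1000 / stacks  # GB/s per stack
    return per_stack_gbs * 8 / bus_width        # convert back to Gb/s per pin

# Rubin: 288 GB / 36 GB per stack = 8 stacks at ~13 TB/s aggregate
print(implied_pin_rate_gbps(13, 288 // 36))   # ~6.3 Gb/s per pin
# MI400: 432 GB / 36 GB per stack = 12 stacks at ~20 TB/s aggregate
print(implied_pin_rate_gbps(20, 432 // 36))   # ~6.5 Gb/s per pin
```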