🤖 AI Summary
AMD says its next-gen Instinct MI450 accelerators (CDNA 5) will use TSMC's N2 (2nm-class) node, making them the company's first AI GPUs built on a leading-edge process. That move, announced for a debut in the second half of next year, could give AMD a node-level advantage over Nvidia's Rubin GPUs, which are slated for TSMC N3. N2 promises "full node" gains over N3E (roughly +10–15% performance at equal power, or −25–30% power at equal frequency, plus ~15% higher transistor density) and introduces gate-all-around (GAA) transistors that enable tighter design/technology co-optimization, potentially letting AMD pack more compute and efficiency into MI450 chiplets.
At the system level, AMD plans a Helios rack with 72 MI450s and HBM4: corrected figures indicate ~31 TB of total memory and ~1,400 TB/s of aggregate bandwidth, versus ~21 TB and 936 TB/s for Nvidia's Rubin-based NVL144. The MI450 family will be AI-tailored (new data formats and instructions) and is expected to ship to early customers such as OpenAI in H2 next year, driving significant revenue upside. However, system-level performance and efficiency remain to be proven: Nvidia's rack reportedly delivers higher FP4 throughput (NVFP4 ~3,600 PFLOPS vs AMD's FP4 ~2,900 PFLOPS), and interconnect scale-up (UALink) together with real-world power characteristics will ultimately determine competitive standing.
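A quick back-of-envelope check on the rack-level figures: dividing the quoted Helios totals by 72 GPUs gives the implied per-accelerator memory and bandwidth. This is only a sketch based on the rounded numbers in the summary, not official per-GPU specs from AMD.

```python
# Implied per-GPU figures from the quoted Helios rack totals.
# Rack totals (~31 TB, ~1,400 TB/s for 72 MI450s) are the rounded
# numbers from the summary above, not confirmed AMD specifications.
GPUS_PER_RACK = 72
TOTAL_MEMORY_TB = 31          # ~31 TB total HBM4 per rack
TOTAL_BANDWIDTH_TBS = 1400    # ~1,400 TB/s aggregate bandwidth

memory_per_gpu_gb = TOTAL_MEMORY_TB * 1000 / GPUS_PER_RACK
bandwidth_per_gpu_tbs = TOTAL_BANDWIDTH_TBS / GPUS_PER_RACK

print(f"~{memory_per_gpu_gb:.0f} GB HBM4 per MI450")    # ~431 GB
print(f"~{bandwidth_per_gpu_tbs:.1f} TB/s per MI450")   # ~19.4 TB/s
```

The same division applied to Nvidia's NVL144 totals (~21 TB, 936 TB/s over 144 GPU chiplets) illustrates why the per-package comparison depends on how each vendor counts devices.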