🤖 AI Summary
TSMC is moving its 2 nm node into production, and a rush of customers, especially high-performance computing (HPC) designers, is lining up. KLA estimates about 15 customers are designing at N2, with roughly 10 focused on HPC and two-thirds of early adopters targeting datacenter workloads. Major names include Nvidia (Rubin Ultra slated for 2027, with potential follow-ons), AMD (the Instinct MI450 AI accelerator with 288 GB of HBM4 and ~18 TB/s of bandwidth, targeting 50 PFLOPS of FP4 compute), Intel (outsourcing some Nova Lake wafers), and cloud/ASIC players such as Google, Broadcom, and reportedly OpenAI. Apple, Qualcomm, and MediaTek are also expected to leverage 2 nm for mobile and laptop SoCs, underscoring cross-market demand.
Technically, the move to 2 nm promises higher transistor density and better performance per watt, raising the bar for chip designs even as node names no longer map directly to physical gate dimensions. Timeline shifts matter: TSMC's A16 (1.6 nm) has slipped to 2026, Rubin and other products have updated ship dates, and Samsung is pressing as a competitor with its own 2 nm fabs and aggressive pricing (reportedly $20K per wafer vs. TSMC's ~$30K). The result is intensified fab competition, supply-chain jockeying, and rapid architectural innovation as datacenter operators and chipmakers chase denser, more power-efficient accelerators for AI training and inference.