🤖 AI Summary
Nvidia this week announced two deals that further cement its dominance over AI datacenter infrastructure: a $5 billion purchase of Intel stock, tied to a partnership to integrate Nvidia NVLink Fusion chiplets and RTX GPU chiplets into Intel Xeon CPUs and hybrid Intel-Nvidia SoC designs, and a pledge to make up to $100 billion available to OpenAI to fund at least 10 gigawatts of Nvidia-based AI compute as datacenters come online. The Intel tie-up means NVLink, previously limited to Nvidia's Grace CPUs and legacy IBM Power systems, will be available on Xeon hosts, letting cloud builders and hyperscalers use Intel CPUs as NVLink-attached hosts and simplifying choices for custom XPUs. Intel gets RTX chiplets and a potential pipeline of foundry customers; Nvidia gets broader platform reach and another avenue to lock in customers.
Technically, Nvidia signals a continuation of its chiplet and rack-scale strategy: future NVL144 systems pair an 88-core Vera Arm server CPU with Rubin GPU packages that, like Blackwell, combine multiple GPU chiplets per package, and the first gigawatt of Vera-Rubin capacity is expected in H2 2026. The $100B commitment will be allocated incrementally as power- and cooling-ready datacenter capacity comes online, and the scale involved (roughly tens of thousands of high-power racks for 10 GW, as the sketch below illustrates) underscores the energy, cooling, and supply-chain implications. Together these moves deepen Nvidia's platform lock-in, reshape CPU-GPU co-design economics, pressure AMD and Intel accelerator roadmaps, and give Nvidia outsized influence over where and how GenAI runs.
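As a rough sanity check on that rack count, here is a minimal back-of-envelope sketch. The per-rack power figures are assumptions (broadly in line with publicly discussed NVL72-class rack draws), not numbers from the announcement:

```python
# Back-of-envelope: how many racks does 10 GW of AI capacity imply?
# Assumption (not from the article): each liquid-cooled rack-scale
# system draws roughly 120-150 kW, similar to NVL72-class figures.

TOTAL_POWER_W = 10e9  # 10 gigawatts of committed capacity

def racks_needed(total_power_w: float, rack_power_kw: float) -> int:
    """Racks required if all power went to IT load (ignores facility
    overhead such as cooling and power-conversion losses)."""
    return round(total_power_w / (rack_power_kw * 1e3))

for rack_kw in (120, 130, 150):
    print(f"{rack_kw} kW/rack -> ~{racks_needed(TOTAL_POWER_W, rack_kw):,} racks")

# 120 kW/rack -> ~83,333 racks
# 130 kW/rack -> ~76,923 racks
# 150 kW/rack -> ~66,667 racks
```

Even before counting cooling and power-conversion overhead, this lands squarely in the tens of thousands of racks the summary cites, which is why the capital is released incrementally as powered, cooled capacity becomes available.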