NVIDIA is investing up to $100 billion in OpenAI to build 10 gigawatts of AI data centers (www.engadget.com)

🤖 AI Summary
NVIDIA announced it will invest up to $100 billion in OpenAI to help build at least 10 gigawatts of dedicated AI data-center capacity running NVIDIA chips and systems. The investment will be disbursed progressively as each gigawatt comes online, with the first phase targeted for the second half of 2026. OpenAI's buildout is expected to require millions of NVIDIA GPUs and will be anchored on NVIDIA's next-generation Vera Rubin platform, promised to be a "big, big, huge step up" from current Blackwell-class accelerators, signaling a major hardware refresh for training and inference at extreme scale.

Technically and strategically, this is a watershed for the AI compute arms race: 10 GW of capacity materially expands the compute available for next-generation foundation models, accelerates co-design of hardware and models, and further centralizes model development around NVIDIA's stack. The deal tightens NVIDIA–OpenAI alignment and shifts cloud economics and supplier dynamics (alongside OpenAI's other partnerships with Microsoft, Oracle, and the Stargate consortium). It also underscores NVIDIA's broader moves into strategic investments and IP licensing (e.g., its recent Intel and Enfabrica deals), with implications for GPU supply chains, data-center power and thermal engineering, and competitive positioning across hyperscalers and chipmakers.