🤖 AI Summary
Nvidia and OpenAI disclosed a landmark strategic letter of intent: OpenAI plans to deploy at least 10 gigawatts of Nvidia systems for its next-generation training infrastructure, and Nvidia plans to invest up to $100 billion in OpenAI as those systems are rolled out (an initial $10 billion phase targeted for H2 2026). Huang said the build could include as many as 5 million Nvidia chips—roughly the company’s annual shipment—and will use Nvidia’s Vera Rubin platform. OpenAI simultaneously expanded its Stargate data-center program with Oracle and SoftBank, adding five more sites and reiterating ambitions to reach roughly 10 GW (and hundreds of billions of dollars in capex) across multiple locations; some coverage characterizes the overall national buildout as approaching trillion-dollar scale. The deal is not finalized.
Why it matters: this materially raises the compute ceiling for training and inference—10 GW clusters could enable models tens to hundreds of times larger than recent generations, turning compute from a hard physical limit into an economic one and accelerating the race toward much more capable (and riskier) models. Technical and market implications include massive demand for Nvidia GPUs, potential round-tripping concerns (New Street estimates OpenAI will buy roughly $35 of chips for every $10 Nvidia invests), dilution of Microsoft’s earlier role, and heightened governance stakes around resource allocation and AGI strategy. Timelines, exact spend cadence, and long-term risks (market concentration, circular financing, and societal tradeoffs over compute priorities) remain unresolved.