Field of GPUs (www.nextplatform.com)

🤖 AI Summary
Nvidia and neocloud CoreWeave quietly amended a master services agreement, disclosed in an SEC 8-K, under which Nvidia will guarantee up to $6.3 billion of GPU compute spending through 2032 if CoreWeave cannot find customers for the capacity. That works out to about $900 million a year; at roughly $10.50/hr for a GB200 “Blackwell” GPU, it equates to roughly 9,387 GPUs running continuously for a year.

Nvidia already holds roughly a 7% stake in CoreWeave (worth about $4.1 billion today) after multiple funding rounds and secondary sales, and it previously supported CoreWeave’s IPO. CoreWeave itself has become a major supplier of model-training capacity (including large rentals to OpenAI), positioning it as an alternative source of GPU capacity to AWS, Azure, and Google Cloud.

The deal matters because it creates a financial backstop that stabilizes supply and demand for hyperscale GPU capacity during the volatile GenAI boom. For the AI/ML ecosystem, that means more resilient access to training and inference capacity, more predictable pricing for spare capacity, and another path for large model builders to source GPUs outside the big clouds. Nvidia can also absorb or redirect capacity without building new datacenters, using CoreWeave’s pools for its own model tuning or chip-design work. Given CoreWeave’s market rally and Nvidia’s profitable GPU sales, analysts expect Nvidia will rarely need to fund the full guarantee, but the commitment materially de-risks capacity for customers and investors alike.
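As a quick sanity check on those figures, here is a minimal back-of-the-envelope sketch in Python. The inputs are the summary's own assumptions (a flat $10.50/hr GB200 rate, round-the-clock utilization, and the guarantee spread evenly over seven years), not anything taken from the filing itself.

```python
# Back-of-the-envelope sketch of the numbers quoted in the summary.
# All inputs are assumptions for illustration, not figures from the 8-K.

GUARANTEE_USD = 6.3e9        # total Nvidia backstop through 2032
YEARS = 7                    # implied by the "about $900M/year" average
HOURS_PER_YEAR = 24 * 365    # continuous operation, no downtime assumed
GB200_RATE_USD_HR = 10.50    # assumed hourly rental rate per GPU

annual_spend = GUARANTEE_USD / YEARS                       # ~ $900M/year
cost_per_gpu_year = GB200_RATE_USD_HR * HOURS_PER_YEAR     # ~ $91,980
gpus_running_full_time = annual_spend / cost_per_gpu_year  # ~ 9,785 GPUs

print(f"annual spend:          ${annual_spend / 1e6:,.0f}M")
print(f"cost per GPU-year:     ${cost_per_gpu_year:,.0f}")
print(f"GPUs running 24/7:     {gpus_running_full_time:,.0f}")

# Note: at exactly $10.50/hr this comes out near 9,800 GPU-years; the
# summary's ~9,387 figure implies a slightly higher effective rate of
# about $10.95/hr. The structure of the calculation is the same either way.
```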