🤖 AI Summary
OpenAI and partners have publicly committed to an enormous buildout of AI data‑center capacity—claims that include up to ~17 GW of compute and recent Stargate announcements of ~7–10 GW—with NVIDIA saying it “intends to invest up to $100 billion” progressively as each gigawatt is deployed (per CNBC, an initial $10B tranche comes at a $500B valuation). Reporting and company statements are inconsistent about locations, about who will build or fund sites (Oracle, SoftBank/SB Energy, Vantage, Crusoe, etc.), and about whether planned facilities actually exist or have broken ground. Analysts say capital markets don’t currently support the scale being promised and that several publicized deals are conditional or not yet finalized.
The financial and technical math explains why: industry examples imply ~2.5 years to deliver 1 GW and roughly $32.5 billion per GW once infrastructure and GPUs are included. One GW is estimated at ~333,000 Blackwell‑class GPUs (~$60k each ≈ $20B) plus networking and sites. If NVIDIA’s $100B arrives in ~$10B tranches per gigawatt deployed, unlocking the full amount implies 10 GW of capacity—on the order of $325B in spending by OpenAI. The author estimates OpenAI needs ~$500B to run, plus hundreds of billions more raised by partners—approaching a trillion dollars over a few years. For the AI/ML community this highlights hard constraints on capital, GPU supply, construction timelines, and investor risk—meaning lofty growth targets could be materially constrained by real‑world economics and logistics.
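The back‑of‑envelope math above can be checked in a few lines. All figures below are the article’s own estimates (GPU count, unit cost, per‑GW cost), and the $10B‑per‑gigawatt tranche size is an assumption inferred from the CNBC‑reported initial tranche—not a confirmed deal structure.

```python
# Back-of-envelope check of the article's figures.
# All numbers are estimates from the summary, not verified data.
gpus_per_gw = 333_000        # Blackwell-class GPUs per gigawatt (article's estimate)
gpu_unit_cost = 60_000       # ~$60k per GPU (article's estimate)
cost_per_gw = 32.5e9         # ~$32.5B per GW including infrastructure

gpu_cost_per_gw = gpus_per_gw * gpu_unit_cost   # GPU spend alone, ≈ $20B per GW

nvidia_total = 100e9         # NVIDIA's "up to $100 billion" commitment
nvidia_per_gw = 10e9         # ASSUMPTION: $10B tranche per gigawatt deployed
gw_to_unlock = nvidia_total / nvidia_per_gw     # gigawatts needed for the full $100B

openai_spend = gw_to_unlock * cost_per_gw       # implied total capacity spend

print(f"GPU cost per GW:    ${gpu_cost_per_gw / 1e9:.1f}B")
print(f"GW to unlock $100B: {gw_to_unlock:.0f}")
print(f"Implied spend:      ${openai_spend / 1e9:.1f}B")
```

Under these assumptions the numbers line up with the summary: ~$20B of GPUs per gigawatt, 10 GW to draw down the full NVIDIA commitment, and roughly $325B of total capacity spending.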