OpenAI's $1T Infrastructure Spend for 2025-2035 (tomtunguz.com)

🤖 AI Summary
OpenAI disclosed plans to spend roughly $1.15 trillion on hardware and cloud infrastructure from 2025–2035, allocating the bulk to Broadcom ($350B), Oracle ($300B), Microsoft ($250B), Nvidia ($100B), AMD ($90B), AWS ($38B), and CoreWeave ($22B).

The announced deals include massive capacity commitments — e.g., a Broadcom-backed deployment of ~10 GW of custom accelerators, AMD's 6 GW of Instinct GPUs, Nvidia and Broadcom deployments starting H2 2026, and Oracle's $60B/year cloud buildout — alongside AWS access to hundreds of thousands of GB200/GB300 GPUs. Modeled spending ramps from ~$6B in 2025 to $173B in 2029 and $295B in 2030, reflecting a rapid infrastructure scale-up. Technically and economically, this signals frontier AI operating at hyperscale: new demand for chips, data-center construction, and multi-cloud capacity that will reshape supplier economics and supply chains.

The analysis also draws out implications for OpenAI's P&L: assuming gross margins rise from 48% (2025) to 70% (2029), the revenue implied to support that COGS would need to jump from ~$12B (2025) to ~$577B (2029) and ~$983B (2030).

Important caveats: accounting treatment matters (chip design costs hit R&D, while manufactured hardware is capitalized and depreciated into COGS), contract timing and structures vary, and the estimates carry ±30–50% error. Still, even with that uncertainty, the commitments reveal an unprecedented capital and compute trajectory for the AI industry.
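The implied-revenue figures follow from the standard gross-margin identity, revenue = COGS / (1 − gross margin). A minimal sketch of that arithmetic, assuming the modeled infrastructure spend flows entirely into COGS each year and that the 70% margin reached in 2029 holds for 2030 (the 2030 margin is an assumption, not stated in the summary):

```python
# Implied revenue needed to support each year's COGS at the stated gross margin.
# Assumptions: modeled infra spend = COGS; 2030 margin held at the 2029 level of 70%.
cases = {
    2025: (6e9, 0.48),    # ~$6B spend, 48% gross margin
    2029: (173e9, 0.70),  # $173B spend, 70% gross margin
    2030: (295e9, 0.70),  # $295B spend, margin assumed flat at 70%
}

for year, (cogs, margin) in cases.items():
    revenue = cogs / (1 - margin)  # gross-margin identity
    print(f"{year}: implied revenue ~ ${revenue / 1e9:.0f}B")
```

This reproduces the summary's figures: ~$12B (2025), ~$577B (2029), and ~$983B (2030), which is why the ±30–50% error band on the spend estimates propagates directly into the revenue requirement.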