🤖 AI Summary
OpenAI’s recent flurry of multi‑billion-dollar pacts with NVIDIA, AMD, Oracle, Samsung, SK and others — deals now estimated by the FT and Bloomberg to top $1 trillion — underscores that the industry race for AI compute is shifting from chips and capital to two overlooked bottlenecks: power and fab capacity. Demand for inference is exploding (Google reported ~500 trillion tokens/month and ~50x YoY growth in May 2025; global token volume may be doubling roughly every 3 months), and meeting that requires many gigawatts of new data‑center power. An xAI “Colossus” consumes ~300 MW; adding 25 such centers in a year would need ~7.5 GW (roughly enough power for three San Francisco–sized cities). NVIDIA and AMD deals explicitly tie financing to GW deployment (NVIDIA up to $100B as each GW comes online; AMD tranches vest from 1–6 GW), signaling that power build‑out — not just GPU supply — is now the KPI.
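The back-of-envelope arithmetic above can be checked in a few lines. A minimal sketch, using only the figures quoted in the summary (~300 MW per Colossus-class center, 25 new centers, one demand doubling per ~3 months):

```python
# Power build-out: 25 Colossus-class data centers at ~300 MW each.
colossus_mw = 300
centers = 25
total_gw = centers * colossus_mw / 1000  # MW -> GW
print(f"{centers} centers x {colossus_mw} MW = {total_gw:.1f} GW")  # 7.5 GW

# Token demand: one doubling every ~3 months compounds to
# 2**(12/3) = 16x over a year.
doubling_months = 3
yearly_multiple = 2 ** (12 / doubling_months)
print(f"Implied annual growth: ~{yearly_multiple:.0f}x")  # ~16x
```

Note that a 16x annual demand multiple is an extrapolation of the quoted ~3-month doubling rate, not a figure stated directly in the source; the reported ~50x YoY growth at Google implies an even faster doubling for that workload.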
Equally crucial is fab capacity. TSMC currently dominates cutting‑edge AI chip production (~90% market share) and already accounts for a large slice of national power (projected ~12% of Taiwan’s by end‑2025). Chips make up ~50–80% of an AI data center’s total cost of ownership, but new fabs take 19–38 months to build; Samsung and Intel might fill gaps, but ramp speed and demand guarantees are uncertain. The upshot: whoever secures gigawatts of reliable power and predictable foundry capacity will control the economics and pace of AI deployment. Tracking power consumption and fab commitments will likely be a more informative metric of long‑term AI progress than headline funding rounds alone.
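The mismatch between fab lead times and demand growth can be made concrete. A hedged illustration combining two figures from the summary (19–38 month fab builds, ~3-month token doubling); the exponential extrapolation is an assumption for illustration, not a forecast from the source:

```python
# How much does token demand grow while a single fab is under construction,
# if the ~3-month doubling rate were to hold for the whole build?
doubling_months = 3
for build_months in (19, 38):  # fab construction range cited in the summary
    growth = 2 ** (build_months / doubling_months)
    print(f"{build_months}-month build: demand grows ~{growth:,.0f}x meanwhile")
```

Even at the optimistic 19-month end, demand would grow ~80x during construction, which is why locked-in foundry commitments matter as much as the fabs themselves.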