Are we repeating the telecoms crash with AI datacenters? (martinalderson.com)

🤖 AI Summary
The piece argues that while the AI datacenter buildout is often likened to the 2000s telecoms crash, the economics and technology dynamics differ in crucial ways. The telecom bust was driven by a massive, debt‑financed overbuild (roughly $2T spent 1995–2000, ~$4T inflation‑adjusted), huge demand overestimates (CEOs claimed traffic was doubling every 3–4 months, versus actual doubling roughly every 12 months), and exponential supply improvements (WDM and better optics increased fibre capacity by orders of magnitude), leaving ~95% of fibre dark.

By contrast, AI infrastructure faces slowing hardware efficiency gains, rising power draw (GPU TDPs: V100 300W → A100 400W → H100 700W → B200 1,000–1,200W, with the B200 requiring liquid cooling), and semiconductor physics limits that preserve the value of existing hardware rather than making it obsolete. Crucially, demand looks very different: agent-driven use cases can multiply token consumption per user by 10–100× (coding agents can consume 150k+ tokens per session), and many providers already hit peak‑time capacity constraints.

Still, long datacenter lead times (2–3 years) and GPU lead times (6–12 months) create a prisoner's dilemma that incentivizes overbuilding to avoid losing market share. Short‑term corrections remain possible (agent adoption stalling, a financing shock, or an unexpected efficiency breakthrough), but unlike telecoms, excess GPU capacity is more likely to sit underutilized for a time than to go permanently "dark," making the analogy imperfect even though the risk of a costly correction is real.
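The gap between claimed and actual growth rates is the crux of the telecom comparison, and the numbers compound dramatically. The following is a rough back-of-envelope sketch, not from the article itself: the doubling periods and the five-year horizon come from the summary above, while the baseline tokens-per-user figure is an assumed placeholder to illustrate the 10–100× agent multiplier.

```python
# Back-of-envelope sketch (illustrative assumptions only): how far apart the
# claimed vs. actual telecom traffic growth rates drift over a 5-year buildout,
# and what a 10-100x per-user agent multiplier implies for token demand.

def growth_factor(doubling_period_months: float, horizon_months: float) -> float:
    """Total growth over the horizon given a fixed doubling period."""
    return 2 ** (horizon_months / doubling_period_months)

horizon = 60  # 1995-2000 buildout window, in months

claimed = growth_factor(3.5, horizon)   # "traffic doubles every 3-4 months"
actual = growth_factor(12, horizon)     # ~12-month doubling in practice

print(f"Claimed 5-year traffic growth: {claimed:,.0f}x")
print(f"Actual 5-year traffic growth:  {actual:,.0f}x")
print(f"Demand overestimate:           {claimed / actual:,.0f}x")

# Hypothetical AI-side comparison: if agents lift per-user token consumption
# by 10-100x (e.g. a coding agent burning 150k+ tokens per session), demand
# can outrun capacity even with a flat user count. The 5,000-token baseline
# is an assumption, not a figure from the article.
baseline_tokens_per_user = 5_000
agent_multiplier_low, agent_multiplier_high = 10, 100
print(f"Agent-era tokens per user per day: "
      f"{baseline_tokens_per_user * agent_multiplier_low:,} - "
      f"{baseline_tokens_per_user * agent_multiplier_high:,}")
```

Under these assumptions the claimed telecom growth rate overshoots the actual one by three to four orders of magnitude over five years, which is the scale of error that left most fibre dark; the AI-side multiplier points the other way, toward demand growing into capacity rather than away from it.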