The AI Bubble's Impossible Promises (www.wheresyoured.at)

🤖 AI Summary
The piece argues we're in an AI infrastructure bubble driven by jaw-dropping promises (trillions of dollars in spend, gigawatt-scale data centers) that ignore the brutal realities of power, supply chains, and hardware economics. It flags a strange AMD–OpenAI deal that ties optioned AMD shares to each gigawatt deployed, alongside OpenAI's commitment to buy "six gigawatts" of GPUs, and notes that OpenAI's Stargate site in Abilene has only a 200 MW substation plus fragile, expensive gas turbines. That is far short of the ~1.7 GW of grid capacity needed to run a 1.2 GW IT load once you account for a typical PUE of ~1.43.

Analysts warn that good gas turbines are many years out and that transformers, electrical-grade steel, and other grid components are in short supply, making multi-gigawatt rollouts slow, costly, and environmentally fraught.

On the compute side, the newsletter calls out systemic risks: GPUs age fast (warranties of roughly 3 years, practical life of 3–5 years), NVIDIA ships new generations yearly, and rental prices for H100/A100 hardware have collapsed from roughly $8/hr in 2023 to $2/hr or less, undermining debt and SPV models that buy chips today in order to rent them out for years. The result is stranded or rapidly devaluing hardware, huge capital intensity for diminishing returns, and a market narrative that glosses over operational constraints: a cautionary wake-up for investors, operators, and researchers about feasibility versus hype.
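The two numbers-driven claims in the summary are easy to sanity-check. Below is a minimal back-of-the-envelope sketch: the IT load (1.2 GW) and PUE (~1.43) figures come from the summary, while the GPU purchase price and utilization rate are assumed values for illustration only, not from the article.

```python
def grid_capacity_required(it_load_gw: float, pue: float) -> float:
    """Grid capacity needed to serve a given IT load at a given PUE."""
    return it_load_gw * pue

# Stargate-style example from the summary: a 1.2 GW IT load at PUE ~1.43
# needs roughly 1.7 GW of grid capacity -- far more than a 200 MW substation.
print(f"{grid_capacity_required(1.2, 1.43):.2f} GW")  # ~1.72 GW


def payback_years(purchase_price_usd: float,
                  rental_rate_usd_per_hr: float,
                  utilization: float = 0.7) -> float:
    """Years of rental income needed to recover a GPU's purchase price,
    ignoring power, cooling, financing and resale value."""
    hours_per_year = 24 * 365
    annual_revenue = rental_rate_usd_per_hr * hours_per_year * utilization
    return purchase_price_usd / annual_revenue

# Assumed ~$30,000 per H100 and 70% utilization (illustrative assumptions,
# not figures from the article). At 2023's ~$8/hr the payback was well under
# a year; at ~$2/hr it stretches toward the ~3-year warranty window.
print(f"{payback_years(30_000, 8.0):.1f} years at $8/hr")  # ~0.6 years
print(f"{payback_years(30_000, 2.0):.1f} years at $2/hr")  # ~2.4 years
```

Under these assumed numbers, the rental-price collapse alone pushes the payback period from months to most of the hardware's practical lifetime, which is the core of the summary's point about debt- and SPV-financed GPU fleets.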