Jevons or Bust (www.a16z.news)

🤖 AI Summary
AI usage is exploding in a way that looks a lot like Jevons Paradox: as models and compute get cheaper and more efficient, consumption soars. Public and third‑party data show token throughput jumping from hundreds of trillions to quadrillions: Google reported monthly token use rising from ~480 trillion to ~1.3 quadrillion within months, while OpenRouter's opt‑in footprint grew from ~300 billion weekly tokens to just under 6 trillion (~19× YoY). Token prices have fallen too: the $/million‑tokens rate is roughly one‑third of its February level, even as paid consumption has quintupled. New entrants can drive sudden step‑ups (xAI captured ~60% of OpenRouter's code‑generation tokens almost overnight), and six of the ten fastest‑growing open‑source projects on GitHub are AI‑focused.

That combination matters for AI/ML strategy and infrastructure: falling per‑token costs plus rising demand could justify the massive GPU and cloud capex buildout if consumption endures, but it also raises boom‑vs‑bust risk. Analysts warn of a shale‑like outcome in which expensive hardware stops producing the needed returns if demand plateaus.

Early signs support the Jevons view: incremental cloud revenue from AI is growing (roughly 1%→5% for big providers) and developer adoption is broad, yet uncertainty remains about long‑term, non‑speculative use cases. Crucially, future demand may be unlocked by agents and developer tooling themselves, meaning LLMs might catalyze new, hard‑to‑predict applications that sustain consumption.
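The price‑versus‑consumption claim above implies total spend still rises despite cheaper tokens. A back‑of‑envelope sketch, using only the article's approximate ratios (prices at roughly one‑third of February levels, paid consumption up ~5×), shows the net effect:

```python
# Back-of-envelope check of the Jevons dynamic described above.
# The ratios are the article's approximate figures, not exact market data.

def spend_multiplier(price_ratio: float, volume_ratio: float) -> float:
    """Net change in total spend when unit price and consumption both shift.

    total_spend = unit_price * volume, so the multiplier is just the product
    of the two ratios.
    """
    return price_ratio * volume_ratio

# Per-token price falls to ~1/3 of its February level...
price_ratio = 1 / 3
# ...while paid token consumption quintuples.
volume_ratio = 5.0

net = spend_multiplier(price_ratio, volume_ratio)
print(f"Total spend multiplier: {net:.2f}x")  # ~1.67x: demand outruns the price drop
```

Under these assumptions, aggregate spend grows by about two‑thirds even as unit prices collapse, which is the Jevons pattern the article describes.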