OpenAI's inflated valuation, as I understand it (taloranderson.com)

🤖 AI Summary
A thoughtful essay argues that while large language models (LLMs) will keep improving and likely generate trillions of dollars in economic value, the labs' sky-high private valuations (OpenAI at ~$500B; Nvidia's ~$4.3T cited for context) are unrealistic unless a lab either invents superintelligence or achieves an unchallenged monopoly. The piece accepts rapid capability growth (not a near-term plateau), highlights the shift from chatbot-style responses to goal-directed agents, and warns that incremental gains become disproportionately valuable as tasks knit into long-horizon chains. Concretely: if a system succeeds at each step with probability p, overall success across n independent steps is p^n, so raising per-step reliability from 90% to 99% lifts a 10-step success chance from 0.9^10 ≈ 35% to 0.99^10 ≈ 90%, which is why small improvements matter massively.

Technically and economically, the author sees three competitive outcomes: commoditized offerings (price/quality competition, as among cloud providers), a single dominant provider that is too big to compete with, or the unlikely arrival of continually self-improving superintelligence. Because multiple capable labs exist (GPT-5, Gemini, Claude, etc.) and training remains expensive, the commodity outcome is plausible, meaning labs will capture only a fraction of the value they help create.

Implications for AI/ML: prioritize reliability for long-horizon tasks, optimize measurable economic objectives beyond code, and recalibrate investment expectations, since current valuations imply a degree of monopolistic capture that today's technical and market dynamics don't support.
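The essay's reliability arithmetic can be sanity-checked in a few lines of Python (the function name `chain_success` is chosen here for illustration and does not appear in the essay):

```python
def chain_success(p: float, n: int) -> float:
    """Probability that all n independent steps succeed,
    given per-step success probability p."""
    return p ** n

if __name__ == "__main__":
    for p in (0.90, 0.99):
        # 10-step chain, as in the essay's example
        print(f"p={p:.2f}: 10-step success = {chain_success(p, 10):.2%}")
    # p=0.90: 10-step success = 34.87%
    # p=0.99: 10-step success = 90.44%
```

Note how the gap compounds: a 9-point gain in per-step reliability yields a roughly 2.6x gain in end-to-end task success, consistent with the ≈35% and ≈90% figures quoted above.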