AI Summary
A thoughtful essay argues that while large language models (LLMs) will keep improving and likely generate trillions in economic value, the sky-high private valuations of labs (OpenAI at ~$500B; Nvidia's ~$4.3T cited for context) are unrealistic unless a lab either invents superintelligence or achieves an unchallenged monopoly. The piece accepts rapid capability growth (not a near-term plateau), highlights the shift from chatbot-style responses to goal-directed agents, and warns that incremental gains become disproportionately valuable as tasks knit into long-horizon chains. A concrete example: if a system succeeds at each step with probability p, overall success across n steps is p^n; raising per-step reliability from 90% to 99% turns a 10-step success chance from 0.9^10 ≈ 35% to 0.99^10 ≈ 90%, showing why small improvements matter massively.
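The compounding-reliability arithmetic above can be sketched in a few lines; `chain_success` is a hypothetical helper name, and the independence of steps is an assumption of the model:

```python
def chain_success(p: float, n: int) -> float:
    """Probability that all n steps of a task chain succeed,
    assuming each step succeeds independently with probability p."""
    return p ** n

# Per-step reliability of 90% vs 99% over a 10-step chain.
for p in (0.90, 0.99):
    print(f"p = {p:.2f}: 10-step success = {chain_success(p, 10):.1%}")
# p = 0.90 gives roughly 34.9%; p = 0.99 gives roughly 90.4%.
```

The same exponent that makes long chains fragile is what makes small per-step gains so valuable: the gap between 90% and 99% per step is 9 points, but over 10 steps it is the gap between a coin flip's worse side and near-reliable completion.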
Technically and economically, the author sees three competitive outcomes: commoditized offerings (price/quality competition like cloud providers), a dominant but realistic "too-big-to-compete" provider, or the unlikely arrival of continually self-improving superintelligence. Because multiple capable labs exist (GPT-5, Gemini, Claude, etc.) and training remains expensive, the commodity outcome is plausible, meaning labs will capture only a fraction of the value they help create. Implications for AI/ML: prioritize reliability for long-horizon tasks, optimize measurable economic objectives beyond code, and recalibrate investment expectations, since current valuations imply a degree of monopolistic capture that technical and market dynamics don't support.