🤖 AI Summary
Tom Cunningham synthesizes takeaways from two mid-September workshops (Windfall Trust’s “Economic Scenarios for Transformative AI” and NBER’s “Workshop on the Economics of Transformative AI”) and his time on OpenAI’s Economic Research team to argue that economics is underprepared for AI’s possible discontinuities. He highlights persistent gaps: no standard definition of machine intelligence, no unifying economic model of AI’s impact, and heavy reliance on GDP as a proxy even though AI can shrink both market exchange and the labor-based imputations that feed GDP. Workshop outputs ranged from scenario planning (incremental growth, runaway economy, labor protections, redistribution) to NBER chapters spanning roughly a dozen areas, yet many economists stayed tethered to present-day AI rather than seriously modeling transformative trajectories.
Technically, Cunningham urges models that treat AI not merely as a labor substitute but as a system that finds low-dimensional representations and can accelerate R&D; this implies that resource scarcity (land, energy, minerals) could become the binding constraint as the value of labor falls. He endorses METR’s “human-time-length of tasks” as a pragmatic capability metric and points to empirical signs of rapid improvement: ARC-AGI scores have jumped sharply, benchmarks typically move from 25% to 75% in roughly 18 months, and chatbot Elo ratings are rising by about 150 points per year. The implication is that forecasts treating AI impact as simple technology diffusion miss capability-driven leaps, so economists and policymakers should adopt richer capability taxonomies and prepare for large, potentially abrupt economic shifts.
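As a rough illustration of that last point (not from the source; the functional forms and the 10%/year diffusion rate are assumptions), the Python sketch below contrasts a logistic capability trend calibrated to the cited 25%→75%-in-18-months pace with a naive constant-rate diffusion baseline.

```python
import math


def logistic_capability(t_years, k=2 * math.log(3) / 1.5, t_mid=0.75):
    """Benchmark score under a logistic capability trend.

    Calibrated (illustratively) so the score moves from 25% at t=0
    to 75% at t=1.5 years, matching the ~18-month pace cited above.
    """
    return 1.0 / (1.0 + math.exp(-k * (t_years - t_mid)))


def diffusion_adoption(t_years, annual_rate=0.10):
    """Naive diffusion baseline: a fixed fraction of remaining
    non-adopters adopts each year (hypothetical 10%/year rate)."""
    return 1.0 - (1.0 - annual_rate) ** t_years


if __name__ == "__main__":
    print(f"{'years':>5} {'capability trend':>17} {'diffusion baseline':>19}")
    for t in [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]:
        print(f"{t:5.1f} {logistic_capability(t):17.2f} {diffusion_adoption(t):19.2f}")
```

Under these assumed parameters the capability curve approaches saturation within a few years while the diffusion baseline is still below one-third adoption, which is the kind of gap Cunningham argues diffusion-style forecasts miss.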