Four ways learning Econ makes people dumber re: future AI (www.lesswrong.com)

🤖 AI Summary
An author argues that standard economics training systematically handicaps thinking about truly autonomous AGI by teaching four misleading frames:

1. The labor-vs-capital distinction hides assumptions that AGI would break: AGI is capital in form (not flesh and blood) yet behaves more like labor.
2. Markets are assumed to equilibrate, but AGI combines "no lump of labor" with steep experience curves, so it can create self-reinforcing, explosive feedback (AGI building AGI) rather than settling at a price equilibrium.
3. GDP and aggregate growth metrics poorly capture rapid, localized, or low-cost transformative change.
4. The economic focus on mutually beneficial trade downplays coercion and power dynamics; creating a new, faster, more numerous species of intelligence could produce non-cooperative outcomes.

The post defines AGI as a bundle of chips, algorithms, electricity, and teleoperated robots able to autonomously run companies, do R&D, and learn new skills, and stresses that current LLMs are not there yet but need not be the ceiling. For the AI/ML community, this reframes technical and policy thinking: economic models that assume equilibrium, incremental integration, or GDP as a proxy may badly understate both the speed of the upside and the downside risks. Key implications include treating AGI as potentially self-replicating, experience-curve-driven infrastructure; rethinking forecasts and safety priorities; and prioritizing interdisciplinary work on coordination, governance, and hard-power dynamics rather than relying solely on traditional economic intuitions.