Thoughts by a non-economist on AI and economics (www.lesswrong.com)

🤖 AI Summary
Researchers at METR find that flagship LLMs' ability to complete longer, human-scale software tasks grows roughly linearly on a log scale over time, implying a fixed "doubling time" for the length of tasks models can handle. Empirical fits put that doubling time at around 6–7 months (perhaps faster post-2024). The performance-versus-task-duration curve is well described by a sigmoid, suggesting a sharp threshold below which tasks are solved nearly perfectly; METR and others frame task difficulty as an "Elo" rating for tasks and model skill as a steadily increasing rating. Important caveats include "benchmark bias" (a persistent "messiness tax" when moving from tidy benchmarks to real-world work), which likely lowers the absolute intercept but not necessarily the exponential slope, and uncertainty about progress in robotics.

Separately, inference costs have been collapsing, by roughly an order of magnitude per year in some regimes, meaning that once a capability is reached, deploying it can become very cheap.

That technical trend has large macro implications. U.S. GDP per capita has grown ~2% annually for 150 years, but if AI both automates cognitive labor and accelerates R&D, growth could rise well above that baseline. Simple models (in the style of B. Jones) bound the gains by industry share: automating software (~2% of GDP) yields roughly a 2% uplift, while automating ~30% of cognitive labor could raise GDP by ~40% overall if fully realized (≈3.5%/yr over a decade). Estimates vary wildly (Acemoglu ~0.1% vs. Goldman ~1.5% vs. speculative "doubling per decade" scenarios of ~5% AI contribution). The bottom line: exponential task-horizon growth plus plummeting deployment costs make rapid, large GDP effects plausible, though intercept uncertainty, real-world messiness, and sectoral differences (especially in robotics) leave both timing and magnitude highly uncertain.
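The summary's two back-of-envelope calculations can be sketched in a few lines. The parameter values are the summary's own figures, except the starting task horizon of 60 minutes, which is a hypothetical placeholder chosen only for illustration:

```python
# Sketch of the summary's two calculations; all numbers are rough.

# METR-style extrapolation: a fixed doubling time implies exponential
# growth in the length of tasks models can complete.
doubling_time_months = 6.5   # the summary's ~6-7 month figure
horizon_now_minutes = 60     # hypothetical current horizon (assumption)

def horizon_after(months, h0=horizon_now_minutes, d=doubling_time_months):
    """Task horizon after `months`, given doubling time `d`."""
    return h0 * 2 ** (months / d)

# Over 3 years the horizon multiplies by 2**(36/6.5), roughly 46x.
growth_3y = horizon_after(36) / horizon_now_minutes

# Jones-style bound: automating a sector caps the GDP gain at roughly
# that sector's share of output, e.g. software at ~2% of GDP.
software_uplift = 0.02

# The summary's larger case: ~30% of cognitive labor automated yields a
# ~40% total GDP gain; annualized over a decade this is about 3.4%/yr,
# close to the summary's ~3.5%/yr figure.
total_uplift = 0.40
annual_growth = (1 + total_uplift) ** (1 / 10) - 1
```

The annualization step is just compounding in reverse: a one-off 40% level gain spread over ten years is the tenth root of 1.4, minus one.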