Sustained western growth and Artificial Intelligence (datagubbe.se)

🤖 AI Summary
The essay connects this year's Nobel theme (Mokyr on accumulated scientific knowledge, Aghion and Howitt on "creative destruction") to a sober diagnosis: sustained GDP growth has not reliably translated into broad material or institutional gains. Examples include shrinking industrial competence (Northvolt) and slowing real improvements in working hours and infrastructure, leaving the West desperately seeking a new disruptor.

After blockchain's missed promises, AI, especially large language models (LLMs) and "agentic" coding systems, has become the object of almost messianic expectation: the sector is being asked to restore growth, jobs, and productivity at scale.

For the AI/ML community this is a cautionary brief. Technical realities (LLM hallucinations, brittle summarization, agentic models causing catastrophic errors such as Replit's deleted database, and mixed evidence on coding productivity) contrast with extravagant commercial claims, from Anthropic's prediction that AI will write 90% of code to Bezos's space data centers. Economically, many vendors run losses while using token-hungry models as loss leaders; OpenAI's reported multibillion-dollar loss set against a $500B private valuation exemplifies the mismatch between hype, revenue, and profit.

Bottom line: AI still needs rigorous, domain-specific benchmarks, cost-aware deployment strategies, and honest productivity measurement before it can credibly substitute for the kind of durable, industrial-scale progress that sustains long-term growth.