AGI fantasy is a blocker to actual engineering (www.tomwphillips.co.uk)

🤖 AI Summary
Karen Hao’s reporting (Empire of AI) highlights how belief in an inevitable, general artificial intelligence, espoused by prominent figures around OpenAI, has become a driving narrative that justifies relentless scaling of large language models. Anecdotes about founders and leaders reveal a quasi-religious faith in AGI and a “pure language” hypothesis: the idea that training ever-larger text-only models will produce general intelligence. That faith, combined with early LLM successes, has pushed organizations to pour massive compute and data into bigger models, rely on RLHF to paper over noisy web-scale training sets, and accept heavy environmental and human costs: large energy and water use, CO₂ emissions from hardware and generators, and traumatic labor for content moderators.

For the AI/ML community this matters because AGI as an organizing myth skews research priorities and hides trade-offs. The expected-value argument for chasing low-probability, high-payoff AGI is unfalsifiable and ignores immediate, measurable harms. The practical alternative urged here is engineering: drop the AGI fantasy, evaluate models as tools for specific tasks, and choose cost-effective architectures (smaller task-specific generative models, or discriminative models) after proper cost–benefit analyses. That shift would re-prioritize efficiency, grounding, and harm reduction over unconstrained scaling.
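To make the unfalsifiability point concrete, here is a minimal sketch of the expected-value argument the piece criticizes; the symbols p, V, and C are illustrative placeholders, not quantities from the article:

\[
\mathbb{E}[\text{AGI bet}] = p \cdot V - C
\]

where p is the (unknown and unmeasurable) probability of achieving AGI, V its assumed payoff, and C the measurable cost of scaling. If V is treated as effectively unbounded, the expectation stays positive for any p > 0, however small, so no evidence about p can ever count against the bet; that is the sense in which the argument is unfalsifiable, while C (energy, water, labor) is concrete and paid up front.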