What the F*ck Is Artificial General Intelligence? (arxiv.org)

🤖 AI Summary
Michael Timothy Bennett argues that AGI remains a meaningful research field, but one muddled by hype; pinning down what AGI is requires long-term scientific work rather than rhetoric. He frames intelligence as adaptive behavior and casts AGI as an "artificial scientist" that discovers and exploits regularities. Drawing on Sutton's Bitter Lesson, he highlights two foundational computational tools—search and approximation—as the engines of adaptation, and surveys representative architectures (o3, AlphaGo, AERA, NARS, Hyperon) that mix these tools in different ways. Bennett categorizes meta-approaches to building intelligent systems as scale-maxing (exploit brute computational/parameter scale), simp-maxing (favor simpler forms per Ockham's Razor), and w-maxing (minimize constraints per Bennett's Razor), and examines proposals like AIXI and the Free Energy Principle alongside the "Embiggening" of language models. His conclusion: contemporary progress is dominated by scale-driven approximation, but true AGI is likely a fusion of diverse tools and meta-strategies. With hardware scaling now a largely solved problem, the field's pressing bottlenecks are sample efficiency and energy efficiency—pointing researchers toward hybrid systems, better learning algorithms, and more principled scientific inquiry into generalization and adaptive behavior.
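To make the search-vs-approximation distinction concrete, here is a toy sketch (my own illustration, not from the paper): both tools can solve the same optimization problem, one by exhaustively enumerating candidates, the other by fitting toward the answer with far fewer evaluations.

```python
# Toy example: minimize f(x) = (x - 3.3)^2 on [0, 10] two ways.
# This is an illustrative sketch of "search" vs "approximation",
# not code from the paper.

def f(x):
    return (x - 3.3) ** 2

def search_min(step=0.001):
    """Search: brute-force enumeration of candidates, keep the best."""
    best_x, best_v = 0.0, f(0.0)
    x = 0.0
    while x <= 10.0:
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
        x += step
    return best_x

def approx_min(lr=0.1, steps=100):
    """Approximation: gradient descent on a differentiable surrogate,
    trading exhaustive coverage for far fewer evaluations."""
    x = 0.0
    for _ in range(steps):
        grad = 2 * (x - 3.3)  # analytic derivative of f
        x -= lr * grad
    return x

print(round(search_min(), 2))  # ≈ 3.3
print(round(approx_min(), 2))  # ≈ 3.3
```

Search scales with the size of the candidate space (10,000 evaluations here); approximation exploits structure (the gradient) to converge in ~100 steps—the trade-off the paper's surveyed architectures mix in different proportions.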