Game over for pure LLMs. Even Rich Sutton has gotten off the bus (garymarcus.substack.com)

🤖 AI Summary
Turing Award winner Rich Sutton, author of the influential 2019 essay "The Bitter Lesson," which argued that scaling general-purpose methods would outcompete hand engineering, has publicly softened his stance, joining a growing chorus of AI leaders who now question "pure" large language model (LLM) scaling. The newsletter reports podcast remarks in which Sutton echoes critiques the author has been making since 2019; taken together with similar turns by Yann LeCun and Demis Hassabis, the piece frames this as a tipping point: the community is moving from "LLMs only" toward hybrid approaches that address the limits of next-token prediction.

Why it matters: this signals a shift in research priorities and investment. Technical implications include renewed emphasis on world models, decision-making via reinforcement learning, better grounded and causal representations, neurosymbolic methods, and built-in inductive biases or "innate constraints" to improve sample efficiency, planning, and robust generalization. Sutton favors stronger RL integration, while the author favors neurosymbolic and constrained architectures, but both agree that prediction alone is insufficient. Practically, the piece argues that a fraction of current LLM funding can and should be reallocated to experiments combining learning, explicit models of environment dynamics, and symbolic structure to build more capable, efficient, and controllable AI systems.