He's Been Right About AI for 40 Years. Now He Thinks Everyone Is Wrong (www.wsj.com)

🤖 AI Summary
Yann LeCun, Meta’s chief AI scientist, told the Wall Street Journal that decades of his predictions about learning from data have been vindicated, but that today’s AI conversation has gone astray. LeCun argues the field is over-indexed on scaling big supervised and generative language models and on treating them as proxies for intelligence. He reiterates his long-standing view that self-supervised, predictive learning, and models that build explicit “world models” or causal representations, are the more promising path to robust, general intelligence, and he warns that hype around imminent AGI and policy panic over current LLM capabilities misread what these systems actually do.

For researchers and practitioners this is a call to rebalance priorities: invest in architectures and training regimes that learn by prediction from raw sensory streams, integrate multimodal and embodied learning, and pursue sample- and compute-efficiency rather than blind scaling. Technically, LeCun’s stance emphasizes unsupervised/self-supervised objectives, learning dynamics that capture causality and interaction, and hardware-aware, sparse models that reduce energy costs.

His critique carries weight because of his track record, and it could influence funding, industry strategy, and the community’s approach to benchmarks and safety debates, nudging attention from short-term performance wins toward long-term mechanisms of intelligence.
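To make the “learn by prediction” idea concrete, here is a minimal sketch (assuming PyTorch, with random tensors standing in for a sensory stream) of a latent-prediction objective: an encoder maps observations to latents, and a predictor is trained to forecast the next latent. The names, dimensions, and data are illustrative assumptions, not anything described in the article; real systems of this flavor (e.g., Meta’s JEPA work) add extra machinery, such as target-encoder tricks, to avoid representational collapse.

```python
# Toy self-supervised predictive learning: predict the latent of the next
# observation from the latent of the current one. Purely illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a raw observation (flattened frame) to a latent vector."""
    def __init__(self, obs_dim=64, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next latent state from the current one (a tiny 'world model')."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
    def forward(self, z):
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)

for step in range(200):
    # Fake "sensory stream": consecutive observations differ by small noise.
    obs_t = torch.randn(32, 64)
    obs_next = obs_t + 0.1 * torch.randn(32, 64)

    z_t = encoder(obs_t)
    with torch.no_grad():          # target latent: no gradient flows through the target
        z_next = encoder(obs_next)
    loss = nn.functional.mse_loss(predictor(z_t), z_next)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the training signal: no labels, only prediction error on the model’s own representation of what comes next. Stopping gradients on the target latent is one simple (and imperfect) way to keep the objective from being trivially satisfied by collapsing all latents to a constant.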