The AI boom is based on a fundamental mistake (www.theverge.com)

🤖 AI Summary
Big-tech claims that AGI or superintelligence is just around the corner rest on a core mistake: equating language competence with human-like intelligence. The piece contrasts hyped pronouncements from CEOs with neuroscience and cognitive-science evidence showing language is primarily a communication tool, not the substrate of thought. fMRI studies and cases of severe aphasia show people can reason, solve problems, and form abstractions without intact language; infants learn about physics and causality long before they speak. By contrast, large language models (LLMs) are statistical token predictors trained on massive text corpora—powerful for generating fluent prose but lacking the non-linguistic, sensorimotor, and procedural knowledge that underpins much of human cognition.

For AI/ML this matters: scaling text-only models is unlikely to deliver genuine general intelligence or the kind of creative paradigm shifts that drive major scientific breakthroughs. Researchers are increasingly pushing for architectures that incorporate world models, persistent memory, planning, multimodal and embodied learning, and distinct cognitive modules rather than a single monolithic LLM. The debate reframes AGI goals from "more data and compute" toward building systems that can represent, act in, and reason about the physical world—and maybe, crucially, adopt the epistemic flexibility that enables novel, non-derivative insights.
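To make the "statistical token predictor" point concrete, here is a minimal toy sketch (not how production LLMs are actually built—they use neural networks trained on vast corpora): a bigram model that generates fluent-looking continuations purely from token co-occurrence counts, with no representation of the world the words describe. The corpus and helper names are illustrative only.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative corpus; real models are trained on billions of tokens.
corpus = (
    "the ball falls to the ground . the ball bounces off the ground . "
    "the cup falls to the floor . the cup breaks on the floor ."
).split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def sample_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    candidates = following[token]
    tokens, counts = zip(*candidates.items())
    return random.choices(tokens, weights=counts, k=1)[0]

# Generate a continuation one token at a time (autoregressively),
# exactly the way an LLM emits text: predict, append, repeat.
token, output = "the", ["the"]
for _ in range(10):
    token = sample_next(token)
    output.append(token)
print(" ".join(output))
```

The output reads like plausible English, yet nothing in the program knows what a ball or a floor is—the article's argument, scaled down: fluent prediction over text is not the same thing as understanding or reasoning about the world.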