Why Experience Defines Intelligence (www.nvegater.com)

🤖 AI Summary
After listening to Richard Sutton, the author reframes the core gap between large language models (LLMs) and biological intelligence as "experience." Intelligence, they argue, arises from a repeated agentic loop: a goal with an expected reward triggers behavior, an expectation informs an action, the action is executed, and the outcome is judged against the expectation; that judgment updates memory and future behavior.

LLMs, by contrast, carry one "tattooed" objective from training (next-token prediction) encoded in frozen transformer weights. Prompts only steer surface behavior (a persona or a pattern), not an internal goal or a forward-looking expectation. LLMs are reactive statistical predictors, not proactive agents that choose, act in the world, generate predictions about future states, and judge outcomes.

This distinction matters technically and strategically for AI/ML. LLMs lack an inherent reward-feedback loop and an internal forward model, and even if you could update their weights in real time, you would face catastrophic forgetting because knowledge is densely distributed across billions of parameters. Closing the gap requires architectures that support persistent goals, online learning, separated fast/slow memory, modular representations or continual-learning mechanisms that avoid interference, and explicit action selection and evaluation. The piece argues that scaling frozen LLMs won't by itself produce animal-like intelligence and points researchers toward agentic, continual-learning designs for real-world adaptive systems.
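As a rough illustration of the loop the summary describes (expectation informs action, the outcome is judged against the expectation, and the error updates memory), here is a minimal Python sketch. The environment, action names, and rates are invented for the example and are not from the original post; it is a toy bandit-style setup, not a claim about the author's proposed architecture.

```python
# Minimal sketch of the agentic loop: expect -> act -> observe -> judge -> update.
# ToyEnvironment, Agent, and all constants are hypothetical illustrations.
import random

EXPLORATION_RATE = 0.1   # how often the agent tries something other than its best guess
LEARNING_RATE = 0.2      # how strongly a prediction error updates memory


class ToyEnvironment:
    """Hypothetical world: each action pays off with a different hidden probability."""

    def __init__(self):
        self._payoff = {"forage": 0.7, "rest": 0.2, "explore": 0.4}

    def step(self, action: str) -> float:
        # Execute the action and return the outcome (reward of 1.0 or 0.0).
        return 1.0 if random.random() < self._payoff[action] else 0.0


class Agent:
    """Keeps a forward-looking expectation per action and updates it from experience."""

    def __init__(self, actions):
        self.expectation = {a: 0.5 for a in actions}  # initial guess for each action

    def choose(self) -> str:
        # The expectation informs the action: mostly exploit, occasionally explore.
        if random.random() < EXPLORATION_RATE:
            return random.choice(list(self.expectation))
        return max(self.expectation, key=self.expectation.get)

    def judge_and_update(self, action: str, outcome: float) -> None:
        # Judge the outcome against the expectation; the error updates memory.
        prediction_error = outcome - self.expectation[action]
        self.expectation[action] += LEARNING_RATE * prediction_error


env = ToyEnvironment()
agent = Agent(["forage", "rest", "explore"])

for _ in range(500):              # the repeated agentic loop
    action = agent.choose()
    outcome = env.step(action)
    agent.judge_and_update(action, outcome)

print(agent.expectation)          # expectations drift toward the hidden payoff rates
```

The contrast with a frozen LLM is that here the expectations keep changing with every outcome, whereas a pretrained model's weights (its single baked-in objective) stay fixed at inference time.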