Why AI can describe taste but not taste (highentropythoughts.substack.com)

🤖 AI Summary
Anecdote meets theory: the author's attempt to describe an obscure fruit, sapota, exposes why humans convey taste effortlessly while AI can only talk about it. Human communication is framed as Shannon-style encoding: speakers compress sensations into sparse language using shared "codebooks" built from direct experience, analogies, and world models, and listeners reconstruct the sensation by decoding against the same codebook.

Large language models, by contrast, are trained with next-token prediction (NTP) on massive text corpora, tuning billions of parameters by gradient descent until they become statistical mirrors of human discourse. That makes LLMs excellent at generating fluent descriptions and analogies (a "statistical echo" of collective experience) but incapable of producing the raw qualia of taste, because their inputs lack embodied sensory grounding.

Technical implications for AI/ML: the essay argues this is why LLMs show striking emergent abilities on purely linguistic tasks (literature synthesis, tutoring, code refactoring) yet fail on physical reasoning, causal dynamics, and tasks requiring sensorimotor priors, hallucinating when language alone is insufficient. Adding multimodal data helps mimic embodied interaction but doesn't guarantee true grounding. The takeaway: progress needs architectures and training regimes that incorporate embodied learning, stronger world-model priors, or bio-inspired inductive biases so models can build shared, experience-derived codebooks; only then might machines move from accurate description toward genuine, grounded understanding.
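The shared-codebook framing above can be made concrete with a toy sketch. Everything here is illustrative (the attribute vectors, the reference foods, and the two-word message length are invented for the example, not taken from the essay): a sender compresses an unfamiliar sensation into a couple of familiar reference words, and a receiver can reconstruct an approximation only because both sides map the same words onto the same experience-derived referents.

```python
# Toy Shannon-style channel with a shared, experience-derived "codebook".
# All taste profiles are hypothetical numbers for illustration only.
codebook = {
    "pear":        {"sweet": 0.60, "grainy": 0.70, "floral": 0.20},
    "brown sugar": {"sweet": 0.90, "grainy": 0.10, "floral": 0.00},
    "honey":       {"sweet": 0.95, "grainy": 0.00, "floral": 0.40},
}

def encode(sensation, codebook, k=2):
    """Sender: compress a sensation into the k reference words whose
    experience-derived referents are closest to it."""
    def dist(word):
        ref = codebook[word]
        return sum((sensation[a] - ref[a]) ** 2 for a in sensation)
    return sorted(codebook, key=dist)[:k]

def decode(message, codebook):
    """Receiver: reconstruct an approximate sensation by averaging the
    referents of the words in the message."""
    attrs = codebook[message[0]].keys()
    return {a: sum(codebook[w][a] for w in message) / len(message)
            for a in attrs}

# A hypothetical sapota profile the sender has tasted but the receiver hasn't.
sapota = {"sweet": 0.85, "grainy": 0.50, "floral": 0.15}
msg = encode(sapota, codebook)        # -> ["pear", "brown sugar"]
print(msg, decode(msg, codebook))     # receiver's reconstruction
```

The reconstruction works only because both parties hold the same codebook; a receiver who has never tasted pear or brown sugar has nothing to decode against, which is the essay's point about sparse language riding on shared experience.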
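For the training side, a minimal PyTorch sketch of the next-token prediction objective the summary names: shift the sequence by one position and minimize cross-entropy between predicted logits and the actual next tokens. The architecture, dimensions, and random "corpus" are placeholder assumptions; only the shift-and-cross-entropy objective is the point.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32

class TinyLM(nn.Module):
    """Stand-in sequence model; real LLMs use transformers, but the
    NTP objective below is the same."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):               # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))  # (batch, seq_len, d_model)
        return self.head(h)                  # logits over the next token

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random token ids stand in for a text corpus.
batch = torch.randint(0, vocab_size, (8, 17))
inputs, targets = batch[:, :-1], batch[:, 1:]   # predict token t+1 from t

opt.zero_grad()
logits = model(inputs)                          # (8, 16, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                 # one gradient-descent step
opt.step()
```

Nothing in this loop ever touches a sensation; the only supervision signal is which token humans wrote next, which is why the resulting model is a statistical mirror of discourse rather than of experience.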