🤖 AI Summary
Peter Kassan’s 2006 critique argues that repeated claims of “human-level” AI are premature. Using Jeff Hawkins’ high-profile brain-modeling announcement as a springboard, Kassan traces AI’s history and its fault lines (connectionism, computationalism, robotics) to show that modeling the brain is far harder than enthusiasts assume. Neuroscience lacks a unifying theory: estimates of basic brain anatomy keep shifting (the cortex alone has roughly 30 billion neurons and “about a thousand trillion” synapses), and previously ignored components such as glial cells turn out to play active roles. Simple examples underscore the gap: the nematode C. elegans, with only ~300 neurons and ~7,000 synapses, still resists accurate modeling, while typical artificial neural networks contain dozens to a few hundred units and at most millions of weighted connections, orders of magnitude smaller and far simpler than biological reality.
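To make that scale gap concrete, here is a minimal back-of-envelope sketch in Python using the figures quoted above; the specific network sizes (~300 units, ~10^6 weights) are illustrative assumptions taken from the summary, not measurements:

```python
import math

# Rough figures cited in the summary -- estimates, not measurements.
human_cortex_neurons = 30e9   # ~30 billion neurons in the cortex
human_synapses = 1e15         # "about a thousand trillion" synapses
c_elegans_neurons = 300       # ~300 neurons
ann_neurons = 300             # a typical (2006-era) artificial network
ann_connections = 1e6         # "at most millions" of weighted connections

def orders_of_magnitude(big: float, small: float) -> float:
    """How many powers of ten separate two quantities."""
    return math.log10(big / small)

print(f"cortex vs. ANN, neurons:   ~10^{orders_of_magnitude(human_cortex_neurons, ann_neurons):.0f}")
print(f"synapses vs. ANN weights:  ~10^{orders_of_magnitude(human_synapses, ann_connections):.0f}")
print(f"cortex vs. C. elegans:     ~10^{orders_of_magnitude(human_cortex_neurons, c_elegans_neurons):.0f}")
```

Even with generous numbers for the artificial side, the gap is eight to nine orders of magnitude in raw counts alone, before accounting for each biological synapse being far more complex than a single weight.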
Kassan highlights concrete technical roadblocks. Realistic synapses may each require thousands to millions of parameters, updated on millisecond-to-submillisecond timescales, which implies astronomical data and compute requirements; even a compact per-synapse representation would yield a simulation whose code and data dwarf today’s largest software projects (he puts it at millions of times the size of Windows). Testing such a system is essentially impossible: the input space explodes combinatorially, and connectionist systems exhibit emergent behavior that cannot be specified in advance. Moore’s Law will not magically close these gaps, since software complexity does not scale the way hardware does. Kassan concludes that brain-based routes to AGI are currently infeasible and that AI research should temper its grand claims and confront these fundamental engineering and scientific limits.
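Kassan’s size comparison can likewise be sanity-checked with rough arithmetic. A hedged sketch, assuming ~10^15 synapses, a placeholder 1,000 bytes of state per synapse (the low end of “thousands of parameters”), and ~1 GB for a 2006-era Windows footprint; all three figures are assumptions for illustration, not the article’s exact numbers:

```python
# Back-of-envelope version of the data-size argument (assumed figures).
synapses = 1e15              # ~a thousand trillion synapses
bytes_per_synapse = 1_000    # assume 1,000 bytes of state per synapse
windows_size_bytes = 1e9     # assume ~1 GB for a 2006-era Windows install

simulation_bytes = synapses * bytes_per_synapse   # 1e18 bytes: an exabyte
ratio = simulation_bytes / windows_size_bytes

print(f"simulation state: ~{simulation_bytes:.0e} bytes")
print(f"~{ratio:.0e}x the assumed Windows footprint")
# Even at a single byte per synapse the state is ~1e15 bytes (a petabyte),
# still ~1e6x the assumed Windows size -- the "millions of times" claim.

# The testing problem is a separate wall: even a tiny 100-bit input space
# has 2**100 distinct inputs, so exhaustive testing is hopeless.
print(f"distinct 100-bit inputs: {2**100:.1e}")
```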