🤖 AI Summary
A recent formal proof establishes that creating human-like or human-level artificial intelligence (AI) through computational learning approaches is inherently intractable: no feasible amount of computation suffices to realize such systems. This challenges the prevailing belief in the AI/ML community that Artificial General Intelligence (AGI) is an imminent outcome achievable by scaling current machine learning methods. The work formalizes the dominant AI engineering narrative as a computational problem and shows it to be NP-hard, meaning that even with ideal training data, the task of replicating human cognition in machines cannot be solved efficiently, i.e., the required computation grows infeasibly with problem size.
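An informal way to build intuition for this kind of intractability (this is only a toy illustration, not the paper's actual formal construction) is to count how many candidate behaviors a learner must in principle discriminate among. Even restricted to Boolean functions on n binary inputs, there are 2^(2^n) distinct behaviors, so exhaustive identification explodes almost immediately. The function name below is illustrative, not from the paper:

```python
# Toy illustration of hypothesis-space explosion, NOT the paper's proof:
# counts the distinct Boolean functions f: {0,1}^n -> {0,1} a learner
# would in principle need to tell apart.

def num_boolean_functions(n_inputs: int) -> int:
    """Each of the 2**n input patterns can map to 0 or 1 independently,
    giving 2**(2**n) distinct functions."""
    return 2 ** (2 ** n_inputs)

for n in range(1, 6):
    print(n, num_boolean_functions(n))
# Already at n = 5 there are 2**32 (over 4 billion) candidate behaviors.
```

The doubly exponential growth here is only a motivating picture; the paper's NP-hardness result is a stronger statement, ruling out clever shortcuts as well as brute force (assuming P ≠ NP).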
This result carries significant implications for the AI field and cognitive science. It argues that current AI systems, while impressive in narrow domains, are at best superficial "decoys" rather than genuine cognitive models, and that relying on them to understand human cognition risks distorting theoretical insight rather than advancing it. The authors advocate a critical shift away from viewing AI as a short-term engineering quest for AGI and a recommitment to AI's original role as a theoretical tool for cognitive science. By separating the theoretical computational framework of cognition from the practical pursuit of recreating minds, the paper suggests a more nuanced path forward: one that leverages computational models for explanatory purposes without conflating computational possibility with practical feasibility.