🤖 AI Summary
Mark Orr’s comment challenges the recent Centaur work (Binz et al., 2025), a transformer-based model pitched as a route toward a unified theory of cognition. Orr contends that Centaur should instead be read as a unified model of behavior that makes no commitment to cognitive explanation: high-fidelity prediction of human choices does not by itself yield mechanistic insight into mental processes. Deeming the cognitive-theory reading “not even wrong,” he argues that conflating predictive performance with explanatory adequacy sidesteps core questions about internal mechanisms, causality, and the process-level constraints that cognitive science seeks to answer.
The piece matters because it pushes the AI/ML and cognitive-science communities to separate metrics of behavioral fit from claims about cognitive theory. For modelers, the takeaway is methodological: transformer architectures that reproduce human outputs need complementary tests (causal interventions, process-level modeling, representational analyses, and theory-driven benchmarks) to support explanatory claims. For the field, Orr’s note reorders interpretability and evaluation priorities, urging researchers to demand mechanistic evidence, not just predictive accuracy, whenever cognitive explanations are claimed for large neural-network models.
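To make the underdetermination point concrete, here is a minimal sketch, assuming a simple two-armed bandit task; the task, model names, and parameter values are illustrative and not drawn from Orr’s comment or the Centaur paper. A value-updating Q-learner and a value-free win-stay/lose-shift heuristic predict the same simulated choice sequence about equally well, so behavioral fit alone cannot say which mechanism, if either, is at work.

```python
# Illustrative sketch (not from Orr's comment or the Centaur paper).
# Two mechanistically different models (a softmax Q-learner that posits
# internal value updating, and a win-stay/lose-shift heuristic that posits
# no values at all) predict the same simulated choices about equally well,
# so predictive accuracy alone cannot adjudicate mechanism.
import numpy as np

rng = np.random.default_rng(0)

def simulate_q_learner(n_trials=2000, alpha=0.9, beta=5.0, p_reward=(0.7, 0.3)):
    """Two-armed bandit choices generated by a softmax Q-learning agent."""
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p = np.exp(beta * q) / np.exp(beta * q).sum()  # softmax policy
        c = rng.choice(2, p=p)
        r = int(rng.random() < p_reward[c])
        q[c] += alpha * (r - q[c])                     # delta-rule update
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def q_model_predictions(choices, rewards, alpha=0.9):
    """Trial-by-trial predicted action under the value-updating mechanism."""
    q, preds = np.zeros(2), []
    for c, r in zip(choices, rewards):
        preds.append(int(np.argmax(q)))  # predict before observing the choice
        q[c] += alpha * (r - q[c])
    return np.array(preds)

def wsls_predictions(choices, rewards):
    """Win-stay/lose-shift: a value-free heuristic over the last outcome."""
    preds = [choices[0]]
    for t in range(1, len(choices)):
        preds.append(choices[t - 1] if rewards[t - 1] else 1 - choices[t - 1])
    return np.array(preds)

choices, rewards = simulate_q_learner()
acc_q = (q_model_predictions(choices, rewards) == choices).mean()
acc_wsls = (wsls_predictions(choices, rewards) == choices).mean()
print(f"Q-learning predictor accuracy:          {acc_q:.3f}")
print(f"Win-stay/lose-shift predictor accuracy: {acc_wsls:.3f}")
# The accuracies come out close, yet the models make incompatible claims
# about what (if anything) is computed internally; distinguishing them
# requires interventions or process-level evidence, not better fit.
```

In this toy setting, a causal intervention (say, reversing the reward contingencies mid-session) or a process-level probe would separate the two accounts even though their predictive scores do not, which is the kind of complementary evidence Orr argues cognitive claims require.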