🤖 AI Summary
Researchers propose a conceptual model that reframes what large language models (LLMs) are doing when they produce human-like language. Rather than treating LLMs as either full-fledged reasoners or simple stimulus–response machines, the paper synthesizes three traditions: the classical symbolic "reasoning" model from early AI (how we think we think), contemporary philosophical accounts of reactive systems, and a sociological view that intelligence is fundamentally collective, performed by individual actors. The result is an alternative account of "mind reading" in communication, one that emphasizes distributed, interactional, and contextual affordances over internal, deliberative mental states.
This synthesis matters for the AI/ML community because it shifts the theoretical footing for interpreting LLM behavior, with practical implications for evaluation, interpretability, and design. If LLM outputs are best seen as situated, socially scaffolded performances rather than as products of internal reasoning, then metrics and alignment efforts should focus more on interaction dynamics, role-taking, and collective practices (e.g., prompt ecosystems, human-in-the-loop scaffolds) than on uncovering a hidden symbolic calculus. The work is conceptual rather than empirical, but it offers a parsimonious framework for future research on human–AI collaboration, the attribution of agency, and the responsible deployment of LLMs in social contexts.