When Using AI, Users Fall for the Dunning-Kruger Trap in Reverse (neurosciencenews.com)

🤖 AI Summary
Aalto University researchers, publishing in Computers in Human Behavior (Oct 27), ran two large experiments in which participants solved LSAT logical-reasoning problems with ChatGPT (Study 1, N=246; Study 2, N=452) and found a surprising twist on the Dunning–Kruger effect. AI use modestly raised task scores (~+3 points vs. norms), but participants systematically overestimated their performance (~+4 points), and the usual pattern, in which less-skilled people overestimate more, disappeared. Instead, higher self-reported AI literacy correlated with greater overconfidence: technically savvy users judged their answers to be more accurate than they actually were, a reversal of the classic DKE. A computational model confirmed these metacognitive shifts, and the results replicated across both studies.

The studies point to cognitive offloading as the core mechanism: most users issued a single prompt, accepted ChatGPT's answer without iterative questioning or verification, and therefore missed the feedback cues needed to calibrate their confidence. Implications for the AI/ML community include the limits of AI literacy alone, the risks of workforce de-skilling and overreliance, and the need to design interfaces that foster metacognition, e.g., requiring users to explain their reasoning, nudging multiple interactions, or providing uncertainty estimates and feedback loops. In short, LLMs can boost performance but may erode users' ability to accurately judge when they're right or wrong.
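To make the calibration measure concrete, here is a minimal sketch (hypothetical data and variable names, not the study's actual code or dataset) of how per-participant overconfidence is typically computed as estimated score minus actual score, and how the reversed DKE pattern would show up as a positive correlation between AI literacy and overconfidence:

```python
# Illustrative sketch with simulated data; the effect sizes below are
# assumptions loosely modeled on the summary above, not the study's numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 452  # roughly Study 2's sample size

actual_score = rng.normal(20, 4, n)    # hypothetical task scores
ai_literacy = rng.normal(0, 1, n)      # hypothetical self-reported AI literacy (z-scored)

# Simulate the reported pattern: everyone overestimates by ~4 points on
# average, and overestimation grows with self-reported AI literacy.
estimated_score = actual_score + 4 + 1.5 * ai_literacy + rng.normal(0, 2, n)

# Overconfidence (miscalibration) per participant: estimate minus reality.
overconfidence = estimated_score - actual_score

# The classic DKE predicts overconfidence falling with skill; the reversed
# pattern instead shows it rising with AI literacy.
print("corr(AI literacy, overconfidence):", np.corrcoef(ai_literacy, overconfidence)[0, 1])
```

Run as written, the printed correlation is strongly positive, mirroring the reversed pattern the summary describes; with classic-DKE data the analogous skill-overconfidence correlation would be negative.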