Can AI tell when someone's lying? MSU study says not yet (msutoday.msu.edu)

🤖 AI Summary
Michigan State University–led researchers (with University of Oklahoma collaborators) published a Journal of Communication study testing whether AI "personas" can detect human deception. Across 12 experiments on the Viewpoints AI research platform, the team ran over 19,000 AI participants, asking each to judge audiovisual or audio-only clips and provide rationales. They manipulated media type, contextual information, lie-truth base rates, and AI persona, then compared model behavior to human performance as framed by Truth-Default Theory (TDT), which holds that humans have a natural truth bias.

Results showed AI is context-sensitive but unreliable. Overall it was lie-biased and much less accurate than humans: 85.8% accuracy on lies versus just 19.5% on truths. In short interrogation-style tasks its accuracy approached human levels, while in non-interrogation scenarios it sometimes exhibited a human-like truth bias. The authors conclude that AI outputs don't match human detection patterns, implying "humanness" may be a boundary condition for deception-detection theories.

Practical takeaway: despite its appeal for policing, HR, or research simulations, current generative-AI systems are not ready or trustworthy for automated lie detection, and professionals should exercise caution until substantial technical improvements and validation are achieved.
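A minimal sketch of why those asymmetric per-class accuracies matter: overall accuracy depends heavily on the lie-truth base rate the study manipulated. The 85.8%/19.5% figures come from the article; the base-rate weighting below is a standard calculation, not something the study reports.

```python
def overall_accuracy(lie_rate: float,
                     acc_on_lies: float = 0.858,
                     acc_on_truths: float = 0.195) -> float:
    """Expected overall accuracy of a judge with the given per-class
    accuracies, when a fraction `lie_rate` of clips are lies.
    Per-class figures default to those reported in the article."""
    return lie_rate * acc_on_lies + (1.0 - lie_rate) * acc_on_truths

# A lie-biased judge looks good only when lies dominate the sample:
print(overall_accuracy(0.5))   # balanced base rate → 0.5265
print(overall_accuracy(0.9))   # mostly lies → 0.7917
print(overall_accuracy(0.1))   # mostly truths → 0.2613
```

This is why a lie-biased model can appear near human-level in interrogation-style settings (where lies are plentiful) yet perform far below chance on truthful material.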