🤖 AI Summary
Researchers ran a preregistered online study (1,276 participants) that mimicked scrolling a social-media feed to measure how well people can tell real from AI-generated images, audio, video, and combined audiovisual content. Overall accuracy was essentially at chance: 51.2%. Modalities varied modestly: image-only 49.4%, video-only 50.7%, audio-only 53.7%, and audiovisual 54.5%. Participants were far better at recognizing fully authentic media (mean 64.6%) than at detecting synthetic content (mean 38.8%). Certain content types were notably harder to flag as fake: synthetic human faces were identified only 46.6% of the time, versus 54.7% for landscapes, and audiovisual clips that paired synthetic video with authentic audio were harder to detect than fully synthetic audiovisual clips.
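To make the "essentially at chance" claim concrete, here is a minimal sketch of how one might test an observed accuracy against the 50% chance level with a binomial test. The trial count is an assumption (one independent judgment per participant, which the study's repeated-measures design does not actually satisfy), so this is illustrative only, not the paper's analysis.

```python
from scipy.stats import binomtest

# Hypothetical illustration: does 51.2% accuracy differ from 50% chance?
# n_trials is an assumption; the real study collected multiple,
# non-independent ratings per participant.
n_trials = 1276                      # assumed: one judgment per participant
n_correct = round(0.512 * n_trials)  # observed overall accuracy of 51.2%

result = binomtest(n_correct, n_trials, p=0.5, alternative="two-sided")
print(f"observed accuracy: {n_correct / n_trials:.3f}")
print(f"two-sided p-value vs. chance: {result.pvalue:.3f}")
```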
The study’s technical and policy significance is clear: perceptual defenses are unreliable and uneven across media types and demographics. Older participants and those judging media in a foreign language were less accurate, while prior familiarity with “deepfakes” did not improve performance, suggesting that public education alone may be insufficient. The effects were small but consistent (e.g., Cohen’s d up to ~0.4 for multimodality benefits), implying detection advantages that are transient and will likely erode as generative models improve. The authors conclude that human judgment cannot be the primary bulwark against synthetic-media harms and argue for robust technical measures (detection algorithms, watermarking, provenance), platform policies, and regulation.
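For readers unfamiliar with the effect-size metric cited above, Cohen’s d is the difference between two group means divided by their pooled standard deviation. The sketch below computes it on invented per-participant accuracy scores (the means and spread are assumptions chosen to roughly match the reported condition means, not the study’s data):

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * group_a.var(ddof=1)
                  + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical per-participant accuracy scores for two conditions
# (values invented for illustration; not the study's data).
rng = np.random.default_rng(0)
audiovisual = rng.normal(0.545, 0.15, size=300)  # assumed mean and SD
image_only = rng.normal(0.494, 0.15, size=300)   # assumed mean and SD

print(f"Cohen's d: {cohens_d(audiovisual, image_only):.2f}")
```

With a mean difference of about five percentage points against a standard deviation of 0.15, this yields d near 0.3 to 0.4, which is why an effect of that size counts as small: the condition distributions overlap heavily even when the difference is reliable.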