🤖 AI Summary
Researchers from Reading, Greenwich and Leeds show that just five minutes of targeted training noticeably improves people’s ability to spot AI-generated faces, but also underline how fragile that advantage is. In an experiment with 664 participants using images from StyleGAN3, untrained “ordinary” observers correctly flagged only 31% of fakes (super‑recognisers reached 41%). After a brief lesson on common generation artifacts — especially around hair and teeth — ordinary detection rose to 51% and super‑recognisers to 64%. The short course also removed an “AI hyper‑realism” bias in which synthetic faces are judged more trustworthy because they look more average, familiar and less memorable. The study appears in Royal Society Open Science.
For the AI/ML community the takeaway is twofold: human training helps but does not solve the problem (a post-training 51% hit rate is still close to chance), and generative models are quickly closing the gap by producing fewer obvious artifacts. That implies growing risk to digital security, social-media integrity and forensic work, and argues for stronger technical countermeasures (robust detection models, provenance, watermarking) alongside policy responses, since purely human inspection will become increasingly unreliable as synthesis methods improve.
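To see why 51% barely clears the 50% chance line, here is a minimal sketch of a one-sample z-test for a proportion against chance. The per-observer trial count is not given in this summary, so the `n_trials = 50` figure below is a hypothetical assumption for illustration, not a number from the study.

```python
import math

def z_vs_chance(p_hat: float, n: int, p0: float = 0.5) -> float:
    """One-sample z-statistic: how many standard errors an observed
    proportion p_hat sits above the chance level p0, given n trials."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    return (p_hat - p0) / se

# Hypothetical: assume each observer judged 50 fake images.
n_trials = 50
z_ordinary = z_vs_chance(0.51, n_trials)  # post-training ordinary observers
z_super = z_vs_chance(0.64, n_trials)     # post-training super-recognisers
```

Under this assumed trial count, the ordinary-observer z-statistic is roughly 0.14, far below any conventional significance threshold, while the super-recogniser figure of 64% is close to two standard errors above chance.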