🤖 AI Summary
AI images and videos are getting startlingly good; examples like Sora 2 show how realistic generated faces and motion can appear. Yet many viewers still report an uneasy "something's off" feeling. TechRadar asked researchers about the uncanny valley and found that the same psychological effect long associated with near-human robots applies to synthetic visual media: as likeness increases, so do our expectations, and tiny glitches in lighting, skin texture, facial micro-expressions, or motion can trigger discomfort. Experts cited include psychologist and horror writer Dr. Steph Lay and human-robot interaction researcher Dr. Christoph Bartneck, who explain that an evolutionarily tuned sensitivity to small irregularities makes us particularly attuned to imperfect human likenesses.
The takeaway for the AI/ML community is practical and social: improving fidelity alone won't eliminate perceptual mismatch and may even heighten scrutiny, because small artifacts matter more as models get better. Algorithmic amplification on social platforms compounds the problem by exposing people to large volumes of near-real content without context, increasing both unease and the stakes for misinformation. Long-term, audiences may become more discerning (not more credulous), so builders should prioritize robustness of facial cues, motion coherence, and transparent labeling. Simple heuristics still help: if an image looks "too perfect," it probably isn't real. Designers should also test for the subtle temporal and spatial artifacts that push content into the uncanny valley; a minimal sketch of one such check follows.
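As one illustration of what such a test could look like (this sketch is an assumption of this summary, not a method from the article), a crude temporal-coherence check can flag single-frame flicker by looking for outliers in frame-to-frame pixel change. The function names, threshold, and synthetic clip below are all hypothetical:

```python
# Hypothetical sketch: flag candidate temporal artifacts in a clip by
# measuring frame-to-frame change. A sudden spike in the mean absolute
# difference between consecutive frames is one crude signal of the
# flicker/jitter that can push generated video into the uncanny valley.
# The threshold and synthetic data are illustrative, not from the article.
import numpy as np

def temporal_flicker_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute difference between consecutive frames.

    frames: array of shape (T, H, W) with pixel values in [0, 1].
    Returns T-1 per-transition scores.
    """
    diffs = np.abs(np.diff(frames, axis=0))  # (T-1, H, W)
    return diffs.mean(axis=(1, 2))           # one score per transition

def flag_artifacts(scores: np.ndarray, z_thresh: float = 3.0) -> np.ndarray:
    """Flag transitions that are outliers against the clip's own baseline."""
    mu, sigma = scores.mean(), scores.std() + 1e-8
    return np.where((scores - mu) / sigma > z_thresh)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a decoded clip: 60 smoothly drifting frames...
    frames = np.cumsum(rng.normal(0, 0.001, size=(60, 32, 32)), axis=0) + 0.5
    # ...with an injected single-frame glitch at t=30.
    frames[30] += 0.2
    scores = temporal_flicker_scores(np.clip(frames, 0, 1))
    print("flagged transitions:", flag_artifacts(scores))  # expect 29 and 30
```

A real evaluation pipeline would go further (optical-flow consistency, landmark stability on faces, learned detectors), but even a simple statistic like this catches the abrupt discontinuities viewers tend to register as "something's off."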