🤖 AI Summary
Researchers at Queen Mary University of London report in PLOS One that state-of-the-art AI voice synthesis can produce "voice clones" that listeners struggle to distinguish from real human recordings. In a study where participants rated audio for realism, dominance and trustworthiness, two kinds of synthetic speech (voice clones built from recordings of real people and outputs from a large, non-person-specific voice model) were judged to be as realistic as genuine human voices. The team did not observe a "hyperrealism" effect, in which AI voices would be judged more human than real ones, but both AI types were rated as more dominant, and some were perceived as more trustworthy, than actual human speech.
Technically notable is how accessible the cloning process has become: the researchers created convincing deepfakes with commercially available tools, only a few minutes of source audio, minimal expertise and low cost. That rapid democratization raises urgent concerns about security, copyright, impersonation and misinformation, while also opening positive use cases in accessibility, education and personalized communication. The study underscores the need for detection methods, policy updates and ethical frameworks as synthetic voices reach parity with real human speech at scale.