🤖 AI Summary
Google’s new AI Mode falsely identified a Sydney Morning Herald graphic designer as “Mercury,” the pseudonym of a man who confessed to abducting and murdering three‑year‑old Cheryl Grimmer in 1971. After a reader queried “Cheryl Grimmer Mercury name,” AI Mode produced a definitive, defamatory answer naming the designer, whose only connection to the story was being credited for an illustration and for redacting a transcript, and cited multiple sources that did not actually identify “Mercury.” The error breached legal protections (the suspect was a juvenile at the time and is protected under NSW law), caused the designer severe distress, and the answer was removed only after the Herald alerted Google.
The incident underscores persistent technical and societal risks of generative search: models are trained to produce probable, fluent answers and can “hallucinate” — misattributing facts or inventing citations when source grounding is weak. Google’s AI Overviews and AI Mode reshape how users consume news by surfacing summaries instead of original reporting, amplifying the damage of errors and reducing publisher traffic. Experts argue these failures are intrinsic to probabilistic generation and point to gaps in accountability, content‑grounding, and citation fidelity. The episode strengthens calls for stricter platform responsibility, better retrieval/verification mechanisms, and legal/regulatory clarity as AI increasingly generates, not just aggregates, public information.
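To make the citation‑fidelity gap concrete: one proposed mitigation is to refuse to surface a generated claim unless the cited sources actually support it. The sketch below is a minimal, hypothetical illustration of that gating step, not Google’s implementation; the `Citation` type, the `claim_is_grounded` check, the example URLs, and the name “Jane Citizen” are all invented for illustration, and real systems would need entity linking rather than substring matching.

```python
# Illustrative sketch of a citation-fidelity gate: withhold an answer that
# names a person unless every cited source text actually mentions that name.
# Hypothetical code, not any production system.

from dataclasses import dataclass


@dataclass
class Citation:
    url: str
    text: str  # source text, assumed to have been retrieved already


def claim_is_grounded(named_entity: str, citations: list[Citation]) -> bool:
    """Return True only if every cited source mentions the named entity.

    Substring matching is a crude stand-in for entity linking; the point is
    the gating step itself, which was evidently absent in the incident.
    """
    needle = named_entity.lower()
    return bool(citations) and all(needle in c.text.lower() for c in citations)


# The failure mode described above: the sources discuss the case but never
# identify "Mercury", yet the model names a specific person anyway.
sources = [
    Citation("https://example.com/report-1",
             "Police interviewed a suspect known only by the pseudonym Mercury."),
    Citation("https://example.com/report-2",
             "The Grimmer case remains open; the confessor's identity is suppressed."),
]

answer_entity = "Jane Citizen"  # hypothetical person the model wrongly named

if claim_is_grounded(answer_entity, sources):
    print(f"Surface answer naming {answer_entity}")
else:
    print("Withhold the identification: no cited source supports it")
```

Run against these inputs, the check correctly withholds the identification, since neither source contains the generated name; that is the verification behavior experts argue generative search currently lacks.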
        