Think you can trust ChatGPT and Gemini to give you the news? Here's why you might want to think again (www.techradar.com)

🤖 AI Summary
A major international audit led by the BBC and coordinated by the European Broadcasting Union examined how AI assistants handle news queries and found widespread failures. Journalists from 22 public media outlets evaluated more than 3,000 responses from ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity across 14 languages in 18 countries. Overall, 45% of answers contained at least one significant problem: 31% had sourcing issues and 20% were factually inaccurate.

Errors ranged from hallucinated details and misattributed quotes to missing context and oversimplified summaries that can change a story's meaning. Google's Gemini performed worst, with significant problems in 76% of evaluated responses, mostly due to poor or absent sourcing. The findings matter because AI assistants are increasingly used as a news interface (the Reuters Institute estimates that 7% of online news consumers, and 15% of people under 25, rely on them), yet their prose often reads with unwarranted authority.

The study highlights structural weaknesses in grounding, citation, and cross-lingual retrieval that make it risky to treat a chatbot's answer as the final word on a news story. The EBU has released a News Integrity in AI Assistants Toolkit to help developers and journalists spot failures and improve responses. The takeaway for the AI/ML community is clear: invest in reliable retrieval, provenance, transparent update mechanisms, and rigorous evaluation if assistants are to serve news reliably.