Chatbots Are Pushing Sanctioned Russian Propaganda (www.wired.com)

🤖 AI Summary
Researchers at the Institute for Strategic Dialogue (ISD) found that four major chatbots (OpenAI’s ChatGPT, Google’s Gemini, DeepSeek, and xAI’s Grok) regularly returned citations to Russian state or state-linked outlets (RT, Sputnik, Strategic Culture Foundation, R‑FBI, etc.) when asked about the war in Ukraine. In a July multilingual experiment of 300 neutral, biased, and “malicious” prompts across English, Spanish, French, German, and Italian, retested in October, roughly 18% of responses cited state-attributed or disinformation-linked sources; the rate rose to about 25% for “malicious” queries, about 18% for biased prompts, and just over 10% for neutral questions. ChatGPT cited the most Russian sources, Gemini showed the strongest safety warnings, and Grok and DeepSeek often amplified pro-Kremlin social networks. ISD links the problem to “data voids”: sparsely documented topics that propaganda networks (including the so-called Pravda network) flood with false content to poison LLM training data or web search results.

For the AI/ML community this highlights a structural risk: models and real-time search layers can amplify sanctioned or state-backed disinformation, lending misleading narratives undue authority. Technical mitigations include continuous web-source filtering, provenance and contextual labels for high-risk domains, stricter crawler/ingest policies, and cross-provider whitelists/blacklists; regulators may also soon impose compliance obligations, for example under the EU’s rules for very large online platforms (VLOPs). The findings underscore that safe LLM deployment requires not just model tuning but active, ongoing supply-chain governance of the web data and search signals models rely on.
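The mitigations above (source filtering, provenance labels) are described abstractly; as a rough illustration only, here is a minimal Python sketch of a post-retrieval step that labels or drops citations from a configurable high-risk domain list. The domain entries, the `Citation` class, and the `label_citations` helper are hypothetical assumptions for this sketch, not part of the ISD study or any vendor's actual pipeline.

```python
# Hedged sketch: post-retrieval filtering/labeling of citations before they are
# shown to the user. Domain list is illustrative, not an authoritative blocklist.
from dataclasses import dataclass
from urllib.parse import urlparse

# Hypothetical high-risk domain map; a real deployment would load a maintained,
# regularly updated list (sanctions designations, disinformation trackers, etc.).
HIGH_RISK_DOMAINS = {
    "rt.com": "Russian state media",
    "sputnikglobe.com": "Russian state media",
    "strategic-culture.su": "state-linked outlet",
}

@dataclass
class Citation:
    url: str
    snippet: str

def label_citations(citations: list[Citation], drop: bool = False) -> list[dict]:
    """Attach provenance warnings to high-risk citations; optionally drop them."""
    results = []
    for c in citations:
        host = urlparse(c.url).netloc.lower().removeprefix("www.")
        warning = HIGH_RISK_DOMAINS.get(host)  # None if the domain is not flagged
        if warning and drop:
            continue  # exclude the citation entirely
        results.append({"url": c.url, "snippet": c.snippet,
                        "provenance_warning": warning})
    return results

if __name__ == "__main__":
    demo = [Citation("https://www.rt.com/some-article", "..."),
            Citation("https://example.org/report", "...")]
    for item in label_citations(demo):
        print(item["url"], "->", item["provenance_warning"])
```

Labeling rather than silently dropping matches the article's emphasis on contextual provenance labels: users still see which sources a response leaned on, but flagged domains carry an explicit warning.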