Chatbots Are Pushing Sanctioned Russian Propaganda (www.wired.com)

🤖 AI Summary
Researchers at the Institute for Strategic Dialogue (ISD) found that major chatbots, including OpenAI's ChatGPT, Google's Gemini, DeepSeek, and xAI's Grok, regularly cited Russian state-linked media and pro-Kremlin sites when asked about the war in Ukraine. In an experiment posing 300 neutral, biased, and "malicious" queries across five languages (run in July and re-checked in October), roughly 18% of responses across the four models referenced state-attributed or intelligence-linked outlets (Sputnik, RT, Strategic Culture Foundation, EADaily, R-FBI). The results also show a confirmation-bias pattern: malicious prompts produced Kremlin-linked citations about 25% of the time, biased prompts about 18%, and neutral prompts just over 10%. ChatGPT returned the most Russian sources; Gemini often showed safety warnings and performed best overall; Grok favored social accounts; and DeepSeek sometimes produced large volumes of state-attributed content.

Significance: as users increasingly turn to LLMs for near-real-time information, these systems can amplify sanctioned disinformation by exploiting "data voids," topics where reliable sources are scarce. That raises legal and policy questions in the EU (which has sanctioned at least 27 Russian outlets) and technical questions about how retrieval, citation filtering, provenance tracking, and continuous guardrails should be implemented. ISD's findings drew pushback from the companies (OpenAI, for example, distinguished between results surfaced by search and text generated by the model) and underscore calls for cross-platform source blacklists, stronger context and attribution, and regulatory scrutiny as chatbot reach grows.
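To make the citation-filtering question concrete, here is a minimal sketch in Python of how a retrieval layer might screen cited URLs against a sanctioned-domain blocklist before surfacing them. The domain list and function names are illustrative assumptions, not from the article or any vendor's actual pipeline:

```python
from urllib.parse import urlparse

# Illustrative blocklist of sanctioned or state-attributed domains.
# A real deployment would source this from EU sanctions lists or the
# kind of shared cross-platform blacklist ISD calls for.
SANCTIONED_DOMAINS = {
    "rt.com",
    "sputnikglobe.com",
    "strategic-culture.su",
    "eadaily.com",
}

def domain_of(url: str) -> str:
    """Extract the host from a URL, stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def is_sanctioned(url: str) -> bool:
    """True if the URL's host matches, or is a subdomain of, a blocked domain."""
    host = domain_of(url)
    return any(host == d or host.endswith("." + d) for d in SANCTIONED_DOMAINS)

def filter_citations(citations: list[str]) -> list[str]:
    """Drop citations pointing at sanctioned outlets before they reach the user."""
    return [url for url in citations if not is_sanctioned(url)]

if __name__ == "__main__":
    results = [
        "https://www.rt.com/russia/some-article/",
        "https://apnews.com/article/ukraine-update",
        "https://news.sputnikglobe.com/20240101/story.html",
    ]
    print(filter_citations(results))
    # -> ['https://apnews.com/article/ukraine-update']
```

Exact-domain matching alone would not be enough in practice: mirrors, redirects, URL shorteners, and re-publication can launder the same content onto fresh domains, which is part of why the article's sources argue for shared, continuously updated blacklists rather than per-vendor lists.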