🤖 AI Summary
Researchers from the Max Planck Institute for Security and Privacy (Bochum), Ruhr University Bochum, and EPFL presented the first automated propaganda detection mechanism tailored to Telegram at USENIX Security (Aug 13, 2025), earning a Distinguished Paper Award. They analyzed 13.7 million comments across 13 news and politics channels and found that 1.8% were propaganda, largely driven by a pro-Russian network (up to 5% of messages in some channels) and a smaller pro-Ukrainian network. The detector exploits behavioral and textual signals: propaganda accounts typically don't initiate threads but reply to user comments containing political keywords (e.g., Putin, Zelensky), and they repeatedly post identical wording across different channels. Using these features, the automated system flags propaganda from a single comment with a 97.6% detection rate, an 11.6 percentage-point improvement over human moderators.
This is significant for AI/ML because it demonstrates a lightweight approach, built on behavioral and duplicate-text signals, that is fast, inexpensive, and scalable for decentralized messenger platforms where moderation is manual and uneven (channel removal rates ranged from roughly 20% to 95%). For practitioners, the work highlights robust signal design beyond simple keyword filters, practical deployment potential for real-time moderation, and benefits for moderator workload and well-being. Key implications include integrating network and duplicate-content detection into ML pipelines, anticipating adversarial adaptation (e.g., message variation to evade duplication checks), and balancing false positives against free-speech concerns when automating moderation.
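The combination of signals described above, verbatim duplication plus reply-only behavior toward politically keyworded comments, can be illustrated with a minimal sketch. This is a hypothetical heuristic, not the paper's actual model: the keywords, threshold, and comment schema are assumptions chosen for illustration.

```python
import hashlib
from collections import Counter

# Example keywords named in the article; a real deployment would use a
# much larger, curated list.
POLITICAL_KEYWORDS = {"putin", "zelensky"}


def text_fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash, so identical wording collides."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()


def flag_propaganda(comments, duplicate_threshold=3):
    """Return indices of comments matching the duplicate+behavior heuristic.

    Each comment is a dict with keys:
      "text": the comment body,
      "is_reply": whether it replies to another comment,
      "parent_text": text of the comment it replies to (or None).
    Threshold and schema are illustrative assumptions.
    """
    counts = Counter(text_fingerprint(c["text"]) for c in comments)
    flagged = []
    for i, c in enumerate(comments):
        # Signal 1: identical wording posted repeatedly across the corpus.
        duplicated = counts[text_fingerprint(c["text"])] >= duplicate_threshold
        # Signal 2: replies to user comments containing political keywords.
        parent = (c.get("parent_text") or "").lower()
        replies_to_political = c["is_reply"] and any(
            kw in parent for kw in POLITICAL_KEYWORDS
        )
        if duplicated and replies_to_political:
            flagged.append(i)
    return flagged
```

A production system would likely use near-duplicate detection (e.g., MinHash or shingling) rather than exact hashing, since the article notes that attackers can evade exact-duplication checks by varying message wording.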