AI "swarms" could distort democracy
A new article from the Science Policy Forum highlights the emerging threat of "malicious AI swarms" that could undermine democratic discourse by manufacturing a false sense of public consensus. These AI-controlled personas can imitate real users on social media, generating context-aware content that adapts to the conversations they join. Unlike traditional bots, which give themselves away through repetitive posting, these swarms can coordinate with one another and maintain persistent identities, making their deception harder to detect and more corrosive to public opinion and social norms.
The significance of this research lies not only in the proliferation of misinformation but in the concept of synthetic consensus: the illusion that a belief is widely shared, even when each individual assertion comes from a fabricated persona. This manipulation of perception can exacerbate existing vulnerabilities in online ecosystems, prompting urgent calls for new safeguards. The authors advocate shifting from moderating individual accounts to detecting coordinated behavior and verifying content provenance, supported by transparent audits and stress testing of social media platforms, along with accountability for how engagement is monetized. Such measures aim to mitigate the risks posed by these advanced AI systems and preserve the integrity of democratic dialogue.
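To make the shift from individual moderation to coordination detection concrete, here is a minimal, illustrative sketch (not from the article) of one common approach: comparing accounts' posting-time profiles and flagging pairs whose activity rhythms are suspiciously similar. All account names, the hourly-histogram representation, and the similarity threshold are hypothetical choices for this toy example.

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length activity vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def flag_coordinated(activity, threshold=0.9):
    """Return account pairs whose posting-time profiles nearly coincide.

    activity maps an account name to a histogram of post counts per hour.
    Near-identical rhythms across supposedly independent accounts are one
    weak signal of coordination; real systems combine many such signals.
    """
    return [
        (a, b)
        for a, b in combinations(sorted(activity), 2)
        if cosine(activity[a], activity[b]) >= threshold
    ]

# Toy hourly post counts (24 bins per account); values are illustrative.
accounts = {
    "user_a": [0, 0, 5, 6, 5, 0] * 4,
    "user_b": [0, 0, 5, 6, 4, 0] * 4,  # near-identical rhythm to user_a
    "user_c": [3, 1, 0, 0, 2, 4] * 4,  # independent rhythm
}
print(flag_coordinated(accounts))  # → [('user_a', 'user_b')]
```

Note that timing similarity alone is a weak signal; production systems would also weigh content similarity, shared infrastructure, and provenance metadata such as cryptographic content credentials.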