🤖 AI Summary
Researchers led by Stanford (paper published in Science) built a web-based browser tool that uses a large language model to scan an X feed and reprioritize it so that posts exhibiting antidemocratic attitudes or hostile partisan language (e.g., calls for violence, jailing opponents, rejection of bipartisan cooperation, fact skepticism) are moved lower in the stream rather than removed. The extension reorders content in seconds and was tested in a 10-day experiment with about 1,200 consenting participants during the 2024 election: participants whose feeds had such content downranked reported warmer feelings toward the opposing party (an average two-point increase on a 1–100 scale) and reduced anger and sadness, and the effect held for both liberals and conservatives.
The work is significant because it demonstrates a practical, platform-independent way for researchers and users to shape algorithmic exposure to polarizing content without needing cooperation from social networks. Technically, the approach combines LLM-based content classification with client-side ranking to intervene subtly (not bluntly) on what users see; the team has released the code so others can develop similar interventions. While the measured effect is modest, the authors argue it’s comparable to years of societal attitudinal change and points to scalable avenues for reducing polarization and improving democratic discourse and mental-health outcomes.
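The released code is the authoritative implementation; purely as a rough illustration of the idea, the classify-then-downrank pattern could look like the minimal sketch below, where a keyword stub stands in for the LLM classifier and all names are hypothetical:

```python
# Minimal sketch (not the authors' code): flag posts with a classifier,
# then stable-sort so flagged posts sink lower in the feed without being removed.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    text: str

# Stand-in for an LLM classification call; the real system would send the
# post text to a model and parse a label from its response.
HOSTILE_MARKERS = ("jail them", "call for violence", "never compromise")

def classify_hostile(post: Post) -> bool:
    """Return True if the post should be downranked (placeholder logic)."""
    text = post.text.lower()
    return any(marker in text for marker in HOSTILE_MARKERS)

def rerank(feed: list[Post]) -> list[Post]:
    """Stable sort by the flag: unflagged posts keep their order up top,
    flagged posts keep their order below them."""
    return sorted(feed, key=classify_hostile)

feed = [
    Post(1, "We should jail them all"),
    Post(2, "Here is my lasagna recipe"),
    Post(3, "Never compromise with the other side"),
    Post(4, "Local park cleanup this Saturday"),
]
print([p.id for p in rerank(feed)])  # flagged posts 1 and 3 drop below 2 and 4
```

The key design point mirrored here is that nothing is deleted: because the sort is stable, the intervention only changes exposure order, which is why it can run client-side without any cooperation from the platform.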