🤖 AI Summary
Marilena Hohmann, Karel Devriendt, and the author published a PLoS One paper introducing a method to estimate affective polarization on social networks, i.e., whether interactions across ideological lines escalate into toxicity. The method combines a network-based measure of ideological polarization (social separation) with an affective component that captures the correlation between disagreement and toxic language. This hybrid approach matters because measuring toxicity or disagreement alone can mislead: when opposing groups stop talking to each other (or one is removed, as with r/the_donald), observed toxicity can fall even as polarization rises. By tracking both who talks to whom and whether those interactions are toxic, the method reveals hidden escalation that simple counts miss.
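To make the idea concrete, here is a minimal sketch of how such a two-part measure could be computed. It is not the paper's actual formulation: the segregation score (share of within-camp interactions), the toy per-interaction disagreement and toxicity values, and the use of a plain Pearson correlation for the affective component are all simplifying assumptions made for illustration.

```python
# Illustrative sketch only, not the authors' formulation.
# Assumptions: two ideological camps, segregation = share of within-camp
# interactions, affective component = correlation between disagreement
# and toxicity per interaction.
import numpy as np

def segregation(edges, ideology):
    """Fraction of interactions that stay within the same ideological camp."""
    same_camp = sum(ideology[u] == ideology[v] for u, v in edges)
    return same_camp / len(edges)

def affective_component(disagreement, toxicity):
    """Pearson correlation between per-interaction disagreement and toxicity
    (a hypothetical proxy for the affective part of the measure)."""
    return float(np.corrcoef(disagreement, toxicity)[0, 1])

# Toy data: four users in two camps, six interactions
ideology = {"a": 0, "b": 0, "c": 1, "d": 1}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("a", "d"), ("b", "c")]
disagreement = np.array([0.1, 0.9, 0.8, 0.2, 0.7, 0.9])  # fabricated scores
toxicity     = np.array([0.0, 0.6, 0.7, 0.1, 0.5, 0.8])  # fabricated scores

print("segregation:", segregation(edges, ideology))                  # low: camps still interact
print("affective:  ", affective_component(disagreement, toxicity))   # high: disagreement tracks toxicity
```

Reading the two numbers together is the point: a low segregation score with a high affective score signals heated cross-camp contact, while rising segregation with a falling affective score signals groups pulling apart.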
They demonstrate the measure on keyword-filtered Twitter discussions about COVID-19 (February to July 2020). Early on, social segregation was low but the affective component was high: people across camps interacted, and heatedly. After roughly nine weeks, social segregation surged and then plateaued while the measured affective component declined, consistent with faction formation and reduced cross-group contact (fewer opportunities for intergroup insults). The result shows how combining network structure with the disagreement–toxicity correlation uncovers polarization that is growing but cloaked by declining interaction, giving researchers and platforms a more accurate tool for monitoring online polarization.
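As a usage note, the same two quantities could be recomputed over weekly windows (reusing the hypothetical segregation and affective_component helpers from the sketch above) to surface exactly this pattern: segregation climbing while the measurable disagreement–toxicity coupling fades as cross-camp exchanges disappear. The data layout below is assumed, not taken from the paper.

```python
# Hypothetical weekly tracking. interactions_by_week maps a week index to a
# tuple (edges, disagreement, toxicity) in the same format as the toy data
# above; segregation() and affective_component() come from the earlier sketch.
def weekly_trend(interactions_by_week, ideology):
    trend = []
    for week in sorted(interactions_by_week):
        edges, dis, tox = interactions_by_week[week]
        trend.append({
            "week": week,
            "segregation": segregation(edges, ideology),
            "affective": affective_component(dis, tox),
        })
    return trend
```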