Study: People Often Trust "Pink-Slime" Fake Local News Sites More Than Real Ones (isps.yale.edu)

🤖 AI Summary
A Yale-supported working paper by Kevin DeLuca and David Beavers tested whether people can distinguish real local news sites from "pink-slime" sites — algorithmically produced outlets that mimic local papers by republishing press releases, crime reports, or campaign finance data while subtly advancing partisan angles. In experiments using live, functional homepages (not screenshots) and filtering for engaged participants, the researchers found preferences roughly split: a media-literacy tip sheet increased cue-checking (bylines, About pages) but only modestly shifted choices — 41% of tip-sheet viewers still preferred the algorithmic site, versus 46% in the control group. Pilot data showed that live sites increased preference for real outlets by 12 points compared with screenshots, and that complaints about intrusive ads made participants 20% less likely to choose a real site. Participants prioritized topical fit and perceived bias over credibility cues, and a local-sounding fake often beat a national brand.

For the AI/ML community, the study highlights how large language models and automation lower the cost of producing plausible, technically accurate content that misleads through source and selection rather than outright fabrication. Media literacy alone proved insufficient; platform and product design (fewer intrusive ads, prominent bylines and ethics statements) and transparent provenance signals are needed to help users judge trustworthiness. The findings also point to a dual role for AI: it enables deceptive "pink-slime" operations at scale, but responsibly applied LLM tools could help auditors and journalists analyze sentiment, provenance, and bias to defend information ecosystems.