AI chatbots are already biasing research (www.nature.com)

🤖 AI Summary
AI chatbots and “answer engines” are reshaping knowledge discovery by ingesting massive amounts of web content while sending almost no traffic back to publishers: OpenAI's crawl-to-referral ratio reportedly rose from ~250 pages crawled per referral visit to ~1,500 within months, Anthropic's from ~6,000 to ~60,000, and Google's AI Overviews tripled the ratio from 6 to 18 pages per referral. Researchers warn this isn't just a traffic problem: because users increasingly trust synthesized answers, these systems selectively surface and re-weight the existing literature and scholar networks, amplifying existing disparities rather than merely summarizing them. Empirical findings show concrete biases: one AI system over-represented scholars with names judged as white, and under-represented those judged as Asian, when recommending peer reviewers (Barolo et al.), and more than 60% of AI-generated paper suggestions fall within the top 1% most-cited articles, over twice the concentration seen in human-curated lists (Algaba et al.). In short, the models have internalized and exaggerated the Matthew effect, in which highly cited work attracts still more citations. With little research on AI-assisted retrieval, and with policy attention focused on authorship ethics rather than discovery, the community risks automated narrowing of citation networks, reduced visibility for less-cited but valuable work, and skewed research agendas; these risks grow as autonomous literature-reviewing agents become feasible.
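The Matthew effect the summary describes can be illustrated with a toy preferential-attachment simulation (a sketch, not the methodology of either cited study; all parameter values here are arbitrary): each new citation lands on a paper with probability proportional to the attention it already has, and citations quickly concentrate in a small top slice.

```python
import random

def simulate_citations(n_papers=1000, n_citations=20000, seed=42):
    """Toy preferential-attachment model of the Matthew effect:
    each new citation picks a paper with probability proportional
    to (1 + its current citation count), so already-cited papers
    accumulate further citations faster than uncited ones."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    papers = list(range(n_papers))
    for _ in range(n_citations):
        # Weight = 1 + current citations: the "rich-get-richer" step.
        chosen = rng.choices(papers, weights=[1 + c for c in counts])[0]
        counts[chosen] += 1
    return counts

counts = sorted(simulate_citations(), reverse=True)
top_1pct_share = sum(counts[: len(counts) // 100]) / sum(counts)
print(f"top 1% of papers hold {top_1pct_share:.0%} of all citations")
```

Under a uniform model the top 1% of papers would hold roughly 1% of citations; the preferential-attachment loop produces a markedly larger share, which is the dynamic a recommender trained on citation-skewed data can then reinforce.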