Stop DDoS Attacking the Research Community with AI-Generated Survey Papers (arxiv.org)

🤖 AI Summary
A position paper warns that the recent flood of AI-generated survey papers—enabled by large language models—amounts to a "survey paper DDoS attack" on the research community, overwhelming preprint servers with redundant, low-quality, and sometimes hallucinated reviews. The authors present quantitative trend analysis and quality audits showing a rapid rise in superficially comprehensive but unreliable surveys that burden researchers, distort literature discovery, and erode trust in the scholarly record. They argue that uncurated mass production of surveys is not a benign productivity gain but a systemic threat to how fields learn and progress. To address this, the paper calls for stronger community norms and infrastructure: mandatory transparency about AI assistance, restored expert oversight in review writing, and new technical platforms such as "Dynamic Live Surveys"—community-maintained, version-controlled repositories that combine automated updates with human curation. These proposals aim to preserve the benefits of automation (timely syntheses, easier updates) while preventing hallucination, redundancy, and noise by prioritizing provenance, versioning, and expert validation. For the AI/ML community, adopting these practices could protect literature quality, reduce review burden, and create sustainable, auditable survey artifacts suitable for fast-moving fields.