🤖 AI Summary
In early 2024, a major publisher retracted hundreds of papers after discovering they were AI‑generated “ghost” manuscripts, complete with fabricated data, nonsensical diagrams, and bogus results that nonetheless passed peer review. That episode exposed not just a new vector for fraud (likely including organized paper‑mill networks exploiting automated content generation) but a structural failure: traditional peer review and publication incentives are brittle enough to let obvious fakes enter the scholarly record, with acute risks for evidence‑dependent fields such as global health.
The crisis, the piece argues, runs deeper than AI misuse: it is an epistemic one. Academic systems systematically devalue “grey literature” and the frontline experiential knowledge of nurses and community health workers, producing testimonial and hermeneutical injustices that hide useful, context‑specific know‑how. Practical interventions help (indexing grey literature, assigning DOIs to nontraditional outputs, e.g., Rogue Scholar, and repositories like Toby Green’s collection), but they are tactical. The suggested remedy is cultural and educational: teach for praxis, collaborative intelligence, and “strong objectivity” (deliberately integrating situated perspectives) so that science can better triangulate truth, repair trust, and resist both AI‑driven fraud and the exclusionary norms that make such fraud possible.