🤖 AI Summary
Materials scientists demonstrated that AI can generate electron microscopy images of convincing but entirely fake nanomaterials: a set of “nano‑cheetos” images produced with ChatGPT fooled experienced researchers and peer reviewers. In a survey of 250 scientists, respondents could not reliably distinguish AI‑generated from genuine micrographs, and the standard visual forensics used to detect manipulation (the telltale artefacts left by tools such as Photoshop) are absent from these generative images. The team warns that this makes figure‑level scientific fraud far harder to spot and that current editorial and peer‑review safeguards are not scaled to the threat.
The paper urges immediate, systemic fixes: require submission and archival of raw instrument files (far harder to spoof than processed images), build institutional data repositories, reduce the pressure for cosmetically “perfect” figures, and fund and encourage replication studies. Automated screening tools such as Proofig AI and Imagetwin are already used by major publishers, but they come with practical trade‑offs: they are tuned for very low false‑positive rates and can miss sophisticated fakes (one flagged figure was caught only after the vendor re‑tuned its tool). Authors and integrity experts say the real remedy must be large‑scale infrastructure and cultural change, since publish‑or‑perish incentives leave the literature vulnerable to scalable, AI‑enabled fraud.