Medical journal publishes a letter on AI with a fake reference to itself
A recent incident at the medical journal Intensive Care Medicine has raised alarm in both the AI and medical communities after the journal published a letter containing a fabricated reference to itself. The letter, written by researchers exploring the use of AI to monitor blood circulation in ICU patients, cited 15 references, ten of which could not be verified, including one citing a nonexistent article supposedly published in the same journal. After the problems were discovered, the journal's editor-in-chief retracted the letter; the inaccuracies were attributed to the authors' use of generative AI for formatting tasks rather than for content creation.
The incident highlights the ethical and practical risks of using AI in academic writing, particularly the unreliability of citations generated by language models. Although the journal's guidelines permit AI-assisted copy editing without disclosure, this case exposes significant gaps in accountability and oversight in the publication process. As AI tools become more embedded in research and writing, maintaining scientific integrity will require rigorous peer review and careful scrutiny of AI-generated content.