🤖 AI Summary
Concerns are mounting over popular AI models, including OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot, generating fabricated references to non-existent journals such as the Journal of International Relief. The International Committee of the Red Cross (ICRC) has flagged the problem, noting that these invented citations mislead students, researchers, and librarians, who then waste time hunting for records that were never published. The trend, often labeled "AI slop," poses a significant challenge to information authenticity and research integrity.
The fallout from hallucinated citations extends beyond user inconvenience: archivists and reference staff now field a growing stream of erroneous inquiries, over 15% of which are AI-generated, according to the Library of Virginia. To counter this, institutions are urging users to verify sources through established academic channels rather than relying solely on AI-generated information; the ICRC specifically recommends consulting the actual published works to confirm that a reference exists. Libraries, in turn, may begin vetting requested sources and asking patrons to disclose the origin of AI-influenced citations, marking a shift in how academic resources are navigated in the age of AI.
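As a minimal sketch of what "verifying through established channels" can look like in practice, the snippet below checks a suspect citation against the public CrossRef REST API (api.crossref.org), a real bibliographic index. The function name, the example citation string, and the overall workflow are illustrative assumptions, not a procedure described by the ICRC or the libraries in the article.

```python
# Hypothetical illustration: checking whether a citation resolves to a real
# published work via the public CrossRef REST API (api.crossref.org).
# The citation text below is an example, not a record from the article.
import requests

def find_published_matches(citation_text: str, rows: int = 3) -> list[dict]:
    """Query CrossRef for works matching a free-text citation and return
    lightweight records (title, journal, DOI) for manual review."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            # CrossRef returns title and container-title as lists.
            "title": (item.get("title") or ["<untitled>"])[0],
            "journal": (item.get("container-title") or ["<no journal>"])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

if __name__ == "__main__":
    # A citation naming a journal that does not exist should return either
    # no hits or only loosely related works -- a signal to dig further.
    suspect = "Humanitarian logistics, Journal of International Relief, 2021"
    for match in find_published_matches(suspect):
        print(match)
```

An empty or weakly matching result set does not prove a citation is fabricated, but it flags the reference for the kind of manual follow-up in library catalogs that the institutions above recommend.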