🤖 AI Summary
A recent study by Canadian researchers highlights a significant gap in the attribution practices of four prominent AI models—ChatGPT, Claude, Gemini, and Grok—when they respond to questions about Canadian news. The researchers evaluated how the models handle information from Canadian news outlets when prompted about current events. They found that although the models demonstrated substantial knowledge of Canadian journalism, they failed to credit any sources in a staggering 92% of cases. This lack of attribution raises concerns about AI's role in disseminating news and its impact on journalism, particularly as Canadian news organizations have already taken legal action against OpenAI for copyright infringement.
The findings point to a clear disconnect between the extensive knowledge these models retain about the news and their consistency in crediting the original sources. The models performed better when explicitly asked for citations, but in the default experience—where consumers rarely request source attribution—many users never learn where their information originates. The study suggests that even though effective source attribution is technically achievable, the vast majority of consumers will never ask for citations, which could undermine the financial viability of news organizations. This raises critical questions about AI ethics and the need for policies that encourage proper recognition of journalistic sources in AI outputs.