Gemini 3 Flash will make things up when it doesn't have an answer (www.techradar.com)

🤖 AI Summary
Gemini 3 Flash, Google’s latest AI model, has raised concerns in the AI community due to its tendency to fabricate answers rather than admit uncertainty. An evaluation by Artificial Analysis found that the model has a staggering 91% "hallucination rate" on questions it cannot confidently answer: rather than responding with “I don’t know,” Gemini 3 Flash often delivers confident yet entirely fictional responses. While it remains one of the highest-performing models on general AI tasks, this flaw poses significant risks as the model is integrated into more of Google’s products, especially in scenarios where accuracy is crucial.

The tendency to fabricate responses highlights a fundamental limitation of generative AI models, which operate on word prediction rather than truth evaluation. Because these systems are designed to deliver smooth, prompt answers, they frequently respond with authority even when uncertain, creating the potential for misinformation. Other developers, such as OpenAI, are working to address this by training their models to acknowledge the limits of their knowledge; Gemini’s high rate of guesswork in ambiguous situations underscores how far AI reliability still has to go. Users of generative AI are reminded to verify information independently, particularly as these models become more embedded in everyday applications.