🤖 AI Summary
Ziff Davis CEO Vivek Shah warns that AI chatbots' product advice can be misleading because "sources matter": many LLM-driven answers now cite marketing and vendor content rather than independent journalism. Shah pointed to major assistants (ChatGPT, Google Gemini, Perplexity, and Anthropic's Claude) and said their citation practices differ widely: in one test, Claude and Gemini leaned more on vendor sources, while Perplexity and ChatGPT relied more on publisher content, with Perplexity surfacing clickable sources most prominently. He urged users to inspect the provenance of chatbot answers, since invisible or non-clickable citations can mask commercial bias, and reminded readers that answers can vary by model and even across repeated prompts.
For the AI/ML community this raises practical and technical concerns about provenance, transparency, and training-data governance. Shah's observation highlights weaknesses in retrieval-augmented generation (RAG) UX and citation discoverability, and the incentives that push models toward vendor-favored content. It also spotlights intellectual-property and dataset-licensing tensions: Ziff Davis is suing OpenAI over content scraping even as Shah says he's "bullish" on AI and open to licensing trusted data. Engineers, product teams, and researchers should prioritize verifiable source attribution, better UI for citations, dataset licensing agreements, and evaluation metrics that penalize commercially biased recommendations.