Information Literacy and Chatbots as Search (buttondown.com)

🤖 AI Summary
Emily M. Bender argues that LLM-driven chatbots are a fundamentally poor replacement for search because they are statistical models that produce plausible-sounding word sequences, not verifiable “answers.” Even when outputs are correct, the accuracy can be accidental, and higher overall accuracy can make a system more dangerous by fostering unwarranted trust: a system that is right 95% of the time can do more harm than one that is right 50% of the time. More importantly, delivering a single synthesized answer cuts users off from the sense-making work central to information literacy: refining queries, comparing sources, judging provenance, and learning how different sources relate. She backs this with prior academic work (2022, 2024) and an op-ed, and stresses that the chatbot interface encourages passive acceptance of authoritative-sounding responses rather than critical evaluation.

For the AI/ML community this has concrete technical and UX implications. Retrieval-augmented generation (RAG), i.e. search plus LLM summarization, does not fix the problem: generated summaries can omit or fabricate details, and their very presence discourages users from inspecting the source documents. Similar caveats apply to code generation, where boilerplate may work yet hide security flaws. Designers should prioritize retrieval-and-ranking models that surface provenance, interfaces that make sources and uncertainty explicit, evaluation metrics beyond fluency (coverage, faithfulness, traceability), and UX that supports drilling down and iterative sense-making rather than delivering single “answers.”
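To make the provenance-and-faithfulness point concrete, here is a minimal Python sketch, not taken from the post: it assumes a hypothetical RAG answer structure (Claim, RagAnswer, supports, unsupported_claims are all illustrative names) in which each generated claim carries the ids of the passages cited for it, and a crude check flags claims whose cited passages do not actually back them up, so an interface could mark them as uncertain and point users back to the sources.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                             # one checkable sentence of the generated summary
    source_ids: list[str] = field(default_factory=list)   # ids of the passages cited for this claim

@dataclass
class RagAnswer:
    claims: list[Claim]        # generated summary, split into individual claims
    passages: dict[str, str]   # retrieved source passages, keyed by id

def supports(passage: str, claim: str) -> bool:
    # Crude stand-in for a faithfulness check; a real system would use an
    # entailment (NLI) model rather than token overlap.
    claim_tokens = set(claim.lower().split())
    overlap = claim_tokens & set(passage.lower().split())
    return len(overlap) >= 0.5 * max(len(claim_tokens), 1)

def unsupported_claims(answer: RagAnswer) -> list[Claim]:
    # Return claims with no cited passage, or whose cited passages fail the
    # check, so the UI can flag them instead of presenting a single flat answer.
    flagged = []
    for claim in answer.claims:
        cited = [answer.passages[i] for i in claim.source_ids if i in answer.passages]
        if not cited or not any(supports(p, claim.text) for p in cited):
            flagged.append(claim)
    return flagged

The design intent of such a structure is exactly what the summary calls for: provenance and uncertainty become explicit, per-claim signals that the interface can surface for drill-down, rather than properties hidden behind a fluent single answer.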