Wikipedia: Writing articles with large language models (en.wikipedia.org)

🤖 AI Summary
Wikipedia has updated its guidance: do not use large language models (LLMs) to create entirely new Wikipedia articles. While LLMs can be “useful tools,” the site warns they are unreliable for generating complete entries from scratch because they commonly invent facts, sources, and context. The guidance points editors to related resources, including Wikipedia’s AI policy, a “signs of AI writing” checklist, an essay on LLMs, and guidelines on AI-generated images, and flags that pages generated by LLMs without human review may be removed under speedy-deletion criterion G15. This matters for the AI/ML community because it underscores real-world limits of current models: hallucinations, unverifiable claims, and the temptation to bypass notability and sourcing rules all threaten encyclopedia quality and trust. Practically, editors should treat LLM output only as a starting aid for drafting, paraphrasing, or brainstorming, and only when every statement is checked against reliable sources and reviewed by a human editor. The policy and related guidance reinforce the need for rigorous provenance, citations, and caution around AI-generated text and images, so that false information does not proliferate on widely consulted platforms.