🤖 AI Summary
When Cornell historian Jan Burzlaff fed Holocaust survivor testimonies into ChatGPT, the model omitted a harrowing detail: a mother cutting her finger to give drops of blood to her dying seven-year-old daughter. Burzlaff uses that example in his essay "Fragments, Not Prompts: Five Principles for Writing History in the Age of AI" (Rethinking History, Sept. 11) to argue that large language models' tendency to favor the "probable" and coherent can erase the ethical, emotional, and contradictory textures that make historical testimony meaningful. In his classroom experiment (the Cornell course JWST 3825) and a study of 1995 survivor recordings from La Paz, Kraków, and Connecticut, he found that AI summaries systematically downplayed or smoothed over intense suffering and singular moments that resist neat categorization.
The episode matters for historians, educators, and anyone using AI in research because it exposes a structural limit of current models: they optimize for fluency and likelihood, not for preserving trauma, silence, or moral weight. Burzlaff warns that relying on such tools risks distorting memory and normalizing algorithmic ethics; he proposes principles that emphasize interpretation over description and collective, reflective use of AI. In short, AI can accelerate summarization but cannot yet "listen," interpret deep human meaning, or keep fractures intact, so historians must steward how pasts are rendered in the era of prediction.