🤖 AI Summary
Meghan O’Rourke, a Yale creative-writing professor, tested ChatGPT and other large language models in her coursework and daily life and reports a mixed, fast-evolving picture: LLMs are already excellent at scaffolded tasks—summaries, stylistic pastiches, draft memos, and logistical work—saving time and reducing cognitive load for busy academics and caregivers. Technically, they can mimic an author’s syntax and tone (she even prompted a bespoke “O’Rourke elongation mode”) and produce undergraduate-level essays and convincing pastiches, but they still hallucinate facts and struggle with complex formal constraints (inventing Montaigne quotes, failing at a sestina). Their outputs read like the work of a highly competent copywriter—rhythmic, concise, and patterned by telltale syntactic tics—mimetic of thought rather than original thought.
O’Rourke argues this should force a coherent pedagogical and institutional response: policing and ad‑hoc detection (seeding prompts, spot checks) aren’t sustainable as models grow more conversational and harder to detect. There are ethical and systemic implications too—ongoing lawsuits over copyrighted training data, significant environmental costs, and a cultural shift in authorship and labor. For humanities instructors, the urgent tasks are to understand LLM capabilities and limits, redesign assignments and assessment, and reckon with how generative A.I. changes both student learning and the experience of creative work.