A history professor says AI didn't break college — it exposed how broken it already was (www.businessinsider.com)

🤖 AI Summary
When University of Texas historian Steven Mintz opened 400 student essays and found identical prose and structure, he concluded the problem wasn't mass cheating but an outdated pedagogy that AI simply exposed. In LinkedIn and Substack posts he argues that universities have long relied on "industrialized education" — mass lectures, standardized prompts, and rubric-driven grading — that asks students to perform exactly the tasks modern generative AI now does well: research, synthesize context, and construct arguments.

In response he has phased out take-home essays in favor of observable, in-person assessments — in-class writing, unscaffolded oral presentations, and student-led discussions — and proposes eliminating graded work done outside class. Mintz's prescription is technical and strategic: use AI to automate "mastery learning" (facts, chronology, basic frameworks) so faculty can prioritize "inquiry learning" — creative problem-solving, ethical reasoning, historical methods, data fluency, and mentorship.

The implication for the AI/ML community is twofold: models are already capable of core academic tasks, forcing educators to redesign learning objectives; and AI can be repurposed as an instructional tool rather than merely a cheating threat. Colleges must either double down on surveillance and standardization or reinvent assessment, pedagogy, and experiential learning over the next five years to preserve the value of a degree.