🤖 AI Summary
College coursework is being upended by capable AI: essays and many STEM problem sets can now be completed by models like ChatGPT, and some models even excel at math and science competitions. Faculty responses range from denial (pointing to hallucinations as fatal flaws) to inaction, since proving AI-assisted cheating is awkward and administratively costly. The net effect is a growing mismatch: traditional assignments and grades risk becoming hollow signals of student ability as students arrive from AI-savvy high schools, while core intellectual skills such as clear thinking and rigorous writing may atrophy if students outsource practice to tools.
The author argues for a bifurcated, pragmatic strategy: teach AI fluency while protecting spaces for unaided skill development. Practically, that means restoring some high-stakes, proctored pen-and-paper assessments to verify baseline mastery, alongside courses that explicitly allow and grade AI-assisted work, provided students disclose and document how they used the tools, with evaluation focused on the originality and intellectual contribution of the final product. The piece warns that denial is unsustainable: institutions must redesign assessment and curricula so that graduates both think independently (writing as thinking) and can productively leverage increasingly powerful AI in research and the workplace. The author plans to pilot this mixed approach in his Johns Hopkins seminars.