Caught cheating in class, college students “apologized” using AI—and profs called them out (arstechnica.com)

🤖 AI Summary
In the University of Illinois introductory course Data Science Discovery, taught by Karle Flanagan and Wade Fagen‑Ulmschneider, professors uncovered widespread attendance fraud when more students were answering in‑class QR‑code quizzes than were physically present. The course uses a tool called Data Science Clicker: each student scans a daily QR code that opens a personalized multiple‑choice question with a roughly 90‑second answer window. After noticing abnormal participation rates in a lecture of more than 1,000 students, the professors examined server logs (refresh counts, IP addresses, and timestamps) and found evidence that students were sharing the moment questions went live and answering remotely. When confronted, some students reportedly sent AI‑generated "apology" messages, part of a wider trend of AI‑produced reflection papers and automated responses.

The episode is a compact case study of how low‑friction digital tools combined with AI enable cheating at scale, and how basic server telemetry can expose it. For the AI/ML community it underscores two technical implications: (1) authentication and ephemeral, device‑bound challenges are needed to prevent relay attacks on QR‑based workflows; and (2) logging and anomaly detection (refresh frequency, IP geolocation, device fingerprints) are effective first‑line defenses, as sketched below. It also points to nontechnical remedies, such as assessment redesign and AI‑aware academic policies, as essential complements to technical fixes.
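The article does not describe the professors' actual analysis, but a minimal sketch of the second implication (first-line anomaly detection over submission logs) might look like the following. Every field name, threshold, and the `Submission` structure here is hypothetical and is not a detail of the Data Science Clicker backend.

```python
# Hypothetical sketch of log-based anomaly detection for a QR-quiz system.
# Field names and thresholds are invented for illustration only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Submission:
    student_id: str
    ip: str
    refresh_count: int          # how many times the quiz page was reloaded
    seconds_after_open: float   # delay between question going live and the answer

def flag_suspicious(submissions, max_students_per_ip=2, max_refreshes=20):
    """Flag IPs that answer for many distinct students and students with
    abnormal refresh behavior. These are crude heuristics, not proof:
    a campus NAT can legitimately put many students behind one IP, so a
    real system would first exclude known on-campus address ranges."""
    students_by_ip = defaultdict(set)
    for s in submissions:
        students_by_ip[s.ip].add(s.student_id)

    shared_ips = {ip for ip, ids in students_by_ip.items()
                  if len(ids) > max_students_per_ip}

    flagged = []
    for s in submissions:
        reasons = []
        if s.ip in shared_ips:
            reasons.append(f"IP {s.ip} used by {len(students_by_ip[s.ip])} students")
        if s.refresh_count > max_refreshes:
            reasons.append(f"{s.refresh_count} refreshes (polling for the question?)")
        if reasons:
            flagged.append((s.student_id, reasons))
    return flagged

if __name__ == "__main__":
    logs = [
        Submission("alice", "10.0.0.5",     3,  12.0),
        Submission("bob",   "203.0.113.7", 58,   4.0),  # heavy refreshing, shared off-campus IP
        Submission("carol", "203.0.113.7",  2,   5.0),
        Submission("dave",  "203.0.113.7",  1,   6.0),
    ]
    for student, reasons in flag_suspicious(logs):
        print(student, "->", "; ".join(reasons))
```

The same logs could feed richer signals (geolocation lookups, device fingerprints, answer-latency distributions), but even these two checks mirror the kind of telemetry the article says exposed the scheme.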