🤖 AI Summary
Several New Zealand universities — notably Massey, Auckland and Victoria — have stopped using automated AI-detection tools on student work, with Massey explicitly citing unreliability after a high-profile tech failure and inconsistent use by staff. Union and academic commentators say detectors produce weak, easily gamed signals and risk false accusations; instructors instead rely on professional judgement, document version histories and secured assessment formats (labs, studio work, oral exams and in-person tests) where AI use can be prevented rather than "detected." Some institutions (Waikato, Lincoln and Canterbury) still use monitoring or detection software, so practice varies widely across the sector.
The shift matters for the AI/ML community because it highlights persistent technical limits of current detectors — high false positive/negative rates, vulnerability to adversarial evasion and inability to attribute authorship — and drives a policy pivot from brittle detection to assessment redesign, AI literacy and ethical use. Practically, universities face higher instructor workload for bespoke, AI-resistant assessments and must balance integrity with pedagogical goals. For researchers and tool builders this underscores priorities: more robust, calibrated detection methods, provenance/forensics that resist manipulation, and tools that support transparency and student-centered learning rather than punitive policing.