Who's Grading You on Coursera? The Shift from Human Peers to AI (www.classcentral.com)

🤖 AI Summary
Coursera is quietly shifting assessment from human peer review toward AI-driven grading: peer-reviewed tasks have fallen from ~39% of courses in 2023 to about 25% now, only 10% of new courses added peer grading last year, and many legacy peer reviews are now optional. AI grading, introduced in late 2024, delivers near-instant results and reduces wait times and some inconsistency; the author's experiments showed the AI reliably failed minimal or empty submissions, and learners can request manual reassessment.

However, longstanding peer-review problems — variable quality, gaming, plagiarism, and delayed feedback — aren't fully solved, and it remains unclear how well AI systems detect generic, plagiarized, or AI-generated responses.

For the AI/ML community this shift is significant: it creates a large-scale, production use case demanding robust, explainable automated evaluation of open-ended work. Key technical needs include reliable rubric alignment, adversarial robustness to gaming and AI-generated content, plagiarism detection at scale, fair and transparent scoring, and human-in-the-loop dispute workflows. There are trade-offs between preserving credential credibility (effortful assessment) and scalability/cost, so researchers and engineers should focus on hybrid systems that combine automated grading speed with targeted human oversight, explainability for appeals, and continuous evaluation of model performance on subjective assignments.
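The hybrid workflow the summary advocates — fast automated scoring, confidence-gated escalation to humans, and a learner-initiated appeal path — can be sketched in a few lines. This is a minimal illustration, not Coursera's actual system: the `MIN_WORDS` and `CONFIDENCE_FLOOR` thresholds, the word-count scoring heuristic, and all function names are hypothetical assumptions; a real grader would score against a rubric with an ML model.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical thresholds (illustrative only, not from the article).
MIN_WORDS = 30          # below this, treat as a minimal/empty submission
CONFIDENCE_FLOOR = 0.7  # below this, route the submission to a human grader

@dataclass
class Submission:
    text: str
    score: Optional[float] = None
    needs_human_review: bool = False

def auto_grade(text: str) -> tuple[float, float]:
    """Toy stand-in for an AI grader: returns (score, confidence)."""
    words = text.split()
    if len(words) < MIN_WORDS:
        # Fail minimal or empty work with high confidence, mirroring
        # the behavior the author's experiments observed.
        return 0.0, 0.95
    # Placeholder heuristic; a real system would evaluate rubric alignment.
    score = min(1.0, len(words) / 300)
    confidence = 0.9 if len(words) > 150 else 0.5
    return score, confidence

def grade(sub: Submission) -> Submission:
    """Automated pass with confidence-gated human escalation."""
    score, confidence = auto_grade(sub.text)
    if confidence < CONFIDENCE_FLOOR:
        sub.needs_human_review = True   # targeted human oversight
    else:
        sub.score = score
    return sub

def request_reassessment(sub: Submission) -> Submission:
    """Learner-initiated appeal: always escalates to a human."""
    sub.needs_human_review = True
    return sub
```

The design choice worth noting is that the dispute path bypasses the confidence gate entirely — any learner appeal reaches a human, which keeps the automated layer's mistakes correctable without sacrificing its speed for the common case.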