Student Perceptions of AI Coding Assistants in Learning (arxiv.org)

🤖 AI Summary
Researchers examined how AI coding assistants affect novice programmers by running a small exploratory study in an introductory programming course. Twenty students completed a two-part exam: first they could use an AI assistant to solve a programming task, then they had to extend that solution without AI. The team collected Likert-scale ratings and open-ended responses to probe perceived benefits and challenges. Students reported that AI help made it easier to understand code concepts and increased confidence during initial development, but many struggled when asked to continue unaided, evidence of overreliance and weak transfer of foundational skills.

For the AI/ML and education communities this flags two technical priorities: (1) design and evaluation of assistants that support learning transfer (e.g., scaffolding, explainability, stepwise solutions, and self-explanation prompts) rather than just producing code, and (2) assessment strategies and tool features that detect and mitigate overreliance (diagnostic tasks, provenance and explainable outputs, integrated formative feedback).

The study's methods (two-phase task, mixed Likert and qualitative data) are a useful template, but the small sample underscores the need for larger, longitudinal work measuring actual skill retention, curriculum-aware prompting, and how assistant behaviors influence learning trajectories.
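One way to analyze the two-phase design described above is a paired comparison of each student's AI-assisted and unaided scores. The sketch below uses entirely hypothetical score data (the paper's actual numbers are not reproduced here) and a simple sign test from the standard library to check whether performance drops once the assistant is removed:

```python
import statistics
from math import comb

# Hypothetical scores (0-10) for 20 students: phase 1 with AI assistance,
# phase 2 extending the same solution unaided. Not the study's real data.
phase1 = [9, 8, 9, 7, 10, 8, 9, 6, 8, 9, 7, 8, 10, 9, 8, 7, 9, 8, 6, 9]
phase2 = [5, 6, 4, 5, 7, 4, 6, 3, 5, 6, 4, 5, 8, 6, 5, 4, 6, 5, 3, 6]

# Per-student drop from assisted to unaided performance.
diffs = [a - b for a, b in zip(phase1, phase2)]
print("mean drop:", statistics.mean(diffs))

# One-sided sign test: under H0 (no systematic drop), each nonzero
# difference is positive with probability 0.5.
n = sum(d != 0 for d in diffs)
n_pos = sum(d > 0 for d in diffs)
p = sum(comb(n, k) for k in range(n_pos, n + 1)) / 2**n
print(f"sign test one-sided p = {p:.2e}")
```

With a sample of 20, a nonparametric test like this (or a Wilcoxon signed-rank test) is a reasonable choice, since Likert-derived scores rarely justify normality assumptions.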