🤖 AI Summary
An MIT education researcher argues that the rush to bolt generative AI into classrooms should be tempered by lessons from past ed‑tech failures: flashy claims (Edison’s film-strip prophecy, the widely taught CRAAP web‑evaluation rules) often outpaced evidence and produced little durable student benefit. Unlike previous tools that schools elected to adopt, AI is an “arrival technology” — it arrives uninvited and forces decisions — so educators face urgent pressures but also heightened risk of harm if policies and practices are rolled out without rigorous testing and support.
The author recommends three practical guideposts for teachers and districts: humility (acknowledge that current best guesses may be overturned), experimentation (pilot AI selectively where it makes pedagogical sense, such as creative electives like filmmaking, while treating core skills like introductory writing more cautiously), and assessment (collect baseline student work from before AI use, then compare post-AI outcomes to judge impact). The author stresses that robust, scalable evidence will take a decade or more, so schools should run local, evidence-informed trials, share findings, and aim to be right rather than first. By 2035 we'll likely know whether AI behaves more like the web (broadly beneficial despite its risks) or like cellphones, where the harms may outweigh the gains.