Armed police handcuff teen after AI mistakes chip bag for gun in Baltimore (www.bbc.com)

🤖 AI Summary
A Baltimore high-school student was handcuffed after an AI-powered gun-detection system flagged a photograph that turned out to be an empty crisps packet. Omnilert, the vendor, says its model initially detected what appeared to be a firearm, a human review team verified the image, and that alert — along with the reviewers’ assessment — was forwarded to the school’s safety team “within seconds.” School staff later cancelled the initial alert, but the principal contacted the school resource officer, who requested police backup; officers arrived armed, searched the 16-year-old, and handcuffed him before confirming there was no weapon. The vendor and police say the incident was resolved safely, while the student and local officials have called for a review of the school’s procedures.

The case underscores core technical and operational risks for AI security tools: false positives, sensor/image ambiguity, and the cascading effects of automated alerts in high-stakes environments. Even with human-in-the-loop verification, timing, escalation policies, and who gets alerted matter as much as model accuracy. Omnilert’s comment that “real-world gun detection is messy” and recent regulatory scrutiny of other vendors (e.g., Evolv Technology’s banned claims) highlight the need for transparent performance metrics, audit logs, conservative thresholds, clear cancellation workflows, and policy oversight before deploying such systems in schools.
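To make the escalation-policy point concrete, here is a minimal, entirely hypothetical sketch of an alert pipeline with a conservative confidence threshold and a cancellation step that fans out to everyone previously notified. All names and threshold values are illustrative assumptions, not Omnilert's actual system or API.

```python
from dataclasses import dataclass, field
from enum import Enum

class AlertState(Enum):
    PENDING_REVIEW = "pending_review"
    DISPATCHED = "dispatched"
    CANCELLED = "cancelled"
    SUPPRESSED = "suppressed"

@dataclass
class Alert:
    alert_id: str
    model_confidence: float
    state: AlertState = AlertState.PENDING_REVIEW
    notified: list = field(default_factory=list)

# Hypothetical conservative threshold: low-confidence detections
# never reach human responders at all.
DISPATCH_THRESHOLD = 0.90

def triage(alert: Alert) -> Alert:
    """Suppress detections below the dispatch threshold."""
    if alert.model_confidence < DISPATCH_THRESHOLD:
        alert.state = AlertState.SUPPRESSED
    return alert

def dispatch(alert: Alert, recipients: list) -> Alert:
    """Notify responders only after an alert survives triage/review."""
    if alert.state is AlertState.PENDING_REVIEW:
        alert.state = AlertState.DISPATCHED
        alert.notified = list(recipients)
    return alert

def cancel(alert: Alert) -> list:
    """Cancel an alert and return everyone who must be told it was
    retracted — so a withdrawn alert cannot keep escalating downstream."""
    alert.state = AlertState.CANCELLED
    return list(alert.notified)
```

The design choice the incident highlights is in `cancel`: a cancellation is only effective if it reaches every party the original alert reached, including any resource officer or police contact, not just the first recipient.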