🤖 AI Summary
Boston’s City Council unanimously banned city government — including the police — from obtaining, retaining, possessing, accessing, or using facial recognition technology, and barred city contracts that would enable face surveillance. The law still allows local officers to act on tips from other agencies that used facial recognition, but prevents direct municipal deployment. The ordinance names vendors Boston has worked with, such as BriefCam, and follows reporting that local agencies have also turned to Clearview AI in the region. The measure cites documented racial accuracy gaps (noting poorer performance on Black and AAPI faces) and was pushed by the ACLU of Massachusetts alongside community groups and councilors Michelle Wu and Ricardo Arroyo.
For the AI/ML community, the decision is significant because it tightens municipal constraints on biometric deployment and adds momentum to a growing pattern of local bans and transparency laws (San Francisco, Oakland, New York’s POST Act). Practically, it affects procurement, vendor market access, and how police integrate third‑party data—potentially shifting reliance to other jurisdictions or manual follow‑up. The vote also reinforces concerns about bias, surveillance impact on marginalized communities, and the need for clear federal standards; industry reactions (Amazon pausing police use, IBM exiting, Microsoft deferring sales pending federal rules) suggest vendors will face increasing policy and reputational pressure.