🤖 AI Summary
The Department of Homeland Security’s Immigration and Customs Enforcement has a roughly $9.2 million contract with Clearview AI — a facial‑recognition company banned from doing business with most Illinois law enforcement under the state’s Biometric Information Privacy Act (BIPA). The contract, which began Sept. 5 and preceded an aggressive Chicago-area enforcement operation, lets ICE and other federal agencies query Clearview’s massive database of images scraped from across the internet to identify alleged victims, suspects in child‑sex crimes, and people accused of assaults on officers. Clearview also holds smaller contracts with the FBI, U.S. Marshals, Army and CBP, and recently settled class-action litigation that could award plaintiffs a potential 23% stake in the company based on a $225M valuation.
The move spotlights a widening governance gap: a Biden-era DHS facial‑recognition policy was removed from the department's public website after the change of administration, leaving federal rules and restrictions unclear even as tools like the Mobile Fortify smartphone app enable in‑field face and fingerprint matches against multiple federal databases. For the AI/ML community this raises urgent technical and ethical issues: algorithmic bias and accuracy (especially for people of color), transparency of training data (web‑scraped images), auditability of watchlists and match thresholds, and the need for enforceable standards, testing protocols, and legal guardrails to prevent unchecked surveillance and civil‑liberties harms.