🤖 AI Summary
Australia’s privacy commissioner has found that Kmart breached privacy law by using facial recognition technology (FRT) to capture the biometric data of “tens or hundreds of thousands” of shoppers across 28 stores between mid‑2020 and July 2022. The pilot cross‑checked customers’ facial measurements against a database of people suspected or known to have committed refund fraud. The regulator ruled the program was disproportionate and conducted without consent, rejecting Kmart’s claim that an exemption applied for investigating unlawful activity. The commissioner noted the number and value of frauds detected were small relative to Kmart’s $9.2B revenue, that images were indiscriminately collected, and that FRT poses significant risks including commercial surveillance, discrimination, and wrongful enforcement. Kmart must stop the practice, publish a statement, and is considering an appeal; it ceased the program when the probe began.
For the AI/ML community this is a clear regulatory signal: biometric systems face heightened scrutiny and must pass proportionality, necessity, and transparency tests. Key technical and operational lessons: conduct rigorous data-protection impact assessments, document accuracy, false-match rates, and bias risks, limit data collection and retention, obtain informed consent or a validated legal basis, and consider less invasive alternatives. Even when deployed for legitimate security goals, FRT carries legal, ethical, and reputational liabilities that demand stronger governance, auditability, and demonstrable efficacy.
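One of the lessons above, documenting false-match rates, can be made concrete. The sketch below is a minimal, hypothetical illustration of how an operator might estimate a system's false non-match rate (FNMR) and false match rate (FMR) at a chosen similarity threshold from labeled score data; the function name, scores, and threshold are all illustrative assumptions, not anything from the Kmart program.

```python
# Hypothetical sketch: estimating match error rates for a face-matching
# threshold from labeled similarity scores (higher score = more similar).
# All values below are invented for illustration.

def match_error_rates(genuine_scores, impostor_scores, threshold):
    """Return (FNMR, FMR): the fraction of genuine pairs rejected and
    the fraction of impostor pairs accepted at the given threshold."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

# Illustrative labeled evaluation scores.
genuine = [0.91, 0.85, 0.78, 0.95, 0.66]   # same-person pairs
impostor = [0.30, 0.55, 0.72, 0.41, 0.25]  # different-person pairs

fnmr, fmr = match_error_rates(genuine, impostor, threshold=0.7)
print(fnmr, fmr)  # → 0.2 0.2
```

Sweeping the threshold over such scores yields the trade-off curve regulators increasingly expect operators to document: lowering the threshold reduces missed matches but raises the false-match rate, which is precisely the wrongful-enforcement risk the commissioner flagged.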