UK's 'deregulatory' AI approach won't protect human rights (www.computerweekly.com)

🤖 AI Summary
Parliamentary inquiry hearings (July–Oct 2025) warned that the UK’s “deregulatory” AI strategy — focused on economic growth and industry adoption — risks leaving major human-rights gaps intact. Experts told the Joint Committee on Human Rights that surveillance, automated decision‑making and predictive policing can scale harms rapidly, deepening public disenfranchisement and embedding discrimination into systems used in employment, benefits, education and policing. Witnesses highlighted the Data Use and Access Act’s liberalisation of automated decisions, weak sectoral regulatory coverage, and limited transparency and accountability around tools such as police facial recognition. Technically, the committee heard how biased training data and socio‑economic proxies (e.g., postcodes) create feedback loops that amplify over‑policing and unequal outcomes, and that even low error rates in high-volume systems translate into large absolute harms. Experts urged sector‑specific rules that cover the full system lifecycle, stronger powers and technical skills for regulators, public “co‑creation” of acceptable uses, and robust redress mechanisms — including extending legal aid to challenge private actors. They also warned that claims of proprietary secrecy can mask inspectable models (e.g., open‑source releases), so oversight must be capable of probing development and deployment practices to prevent systemic rights violations.
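The point about low error rates in high-volume systems is simple arithmetic, and a minimal sketch makes it concrete. The figures below are hypothetical, not from the hearings: a per-decision error rate that sounds negligible still yields a large absolute number of people wrongly flagged once a system operates at population scale.

```python
def expected_errors(error_rate: float, decisions: int) -> int:
    """Expected number of erroneous outcomes given a per-decision
    error rate and the total volume of automated decisions."""
    return round(error_rate * decisions)

# Hypothetical illustration: a facial-recognition system with a
# 0.1% false-match rate, applied to 1,000,000 scans, still
# misidentifies around a thousand people.
print(expected_errors(0.001, 1_000_000))  # → 1000
```

This is why witnesses framed scale itself as a rights issue: accuracy statistics that look acceptable in percentage terms can still mean thousands of individual wrongful outcomes, each needing a route to redress.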