The largest AI security risks aren't in code, they're in culture (www.techradar.com)

🤖 AI Summary
AI security failures are often less about buggy models and more about the organization that builds and operates them. Darren Lewis argues that the biggest risks emerge slowly through unclear ownership, unmanaged updates, poor handovers and fragmented decision-making as models are retrained, reused and redeployed across teams. While regulations such as the UK's Cyber Security and Resilience Bill, the EU AI Act and the UK AI Cyber Security Code of Practice raise expectations for operational assurance and incident response, they don't yet fully capture the day-to-day development and maintenance practices where risk actually accumulates.

For practitioners this reframes resilience as a socio-technical problem: technical controls matter, but so do change management, provenance, metadata, versioning, audit logs and clear sign-offs. Small configuration or data tweaks (threshold adjustments, dataset updates or undocumented deployments) can propagate widely if there's no traceability or accountable owner. Organizations should bake cultural controls into operations: explicit ownership, documented handovers, shared norms, continuous monitoring and forums for cross-team coordination. Research and programs (e.g., LASR, National AI Awards) show that embedding governance in routine practices makes AI systems easier to see, surface and remediate, turning culture into a practical control surface as AI scales into high-stakes domains like healthcare and finance.
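The article's emphasis on provenance, versioning, audit logs and sign-offs lends itself to a small illustration. The sketch below is a hypothetical example (names such as `ModelChangeRecord` and `append_audit_entry` are not from the article) of how a team might record who changed what, when, and under whose approval, so that a small threshold or dataset tweak remains traceable rather than silently propagating.

```python
# Hypothetical sketch: a minimal provenance/audit record for model changes.
# Illustrates the idea that small tweaks (thresholds, dataset updates,
# redeployments) should carry an accountable owner and an explicit sign-off.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("model_audit_log.jsonl")  # assumed append-only log location


@dataclass
class ModelChangeRecord:
    model_name: str      # which model or artifact was touched
    model_version: str   # version identifier after the change
    change_type: str     # e.g. "threshold_adjustment", "dataset_update"
    description: str     # what changed and why
    owner: str           # accountable owner for this change
    approved_by: str     # explicit sign-off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_audit_entry(record: ModelChangeRecord, log_path: Path = AUDIT_LOG) -> None:
    """Append one JSON line per change so the history is easy to replay later."""
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    # Example: a threshold tweak that would otherwise be an invisible change.
    append_audit_entry(ModelChangeRecord(
        model_name="fraud-scoring",
        model_version="2.4.1",
        change_type="threshold_adjustment",
        description="Raised alert threshold from 0.72 to 0.78 after drift review",
        owner="risk-ml-team",
        approved_by="model-governance-board",
    ))
```

A JSON Lines log like this is only one possible mechanism; the point from the article is that some durable, owned record of each change exists and is reviewed, whatever the tooling.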