🤖 AI Summary
A report from UpGuard reveals that nearly 90% of security professionals use unapproved AI tools at work, and over 80% of employees do the same. This phenomenon, termed "shadow AI," poses significant challenges because these tools both store and process corporate data, often without any oversight. The resulting non-compliance with data protection regulations is costly: breach costs average $670,000 higher for organizations with high levels of unsanctioned AI usage. The report also finds that outright bans are ineffective, as employees continue to find and use these tools, often producing unvalidated outputs and decisions based on flawed data.
Beyond traditional AI tools, the emergence of agentic AI systems like OpenClaw is raising new security concerns. These agents perform tasks autonomously and access data using the user's own permissions, making their activity difficult to distinguish from legitimate work. Cisco's research indicates that malicious plugins are already exploiting these systems, underscoring the risks of broad access and weak vetting in AI applications. With Gartner predicting a rise in enterprise applications featuring AI agents, robust security measures, including monitoring for AI-specific threats and stronger governance, are urgently needed to protect sensitive corporate information.