🤖 AI Summary
New Cybernews research finds “shadow AI” is widespread: 59% of workers admit to using unapproved AI tools at work, and 75% of those users say they’ve fed sensitive company data into those services. Senior leaders are the biggest offenders (93% of execs/senior managers), followed by managers (73%) and professionals (62%). Shared data includes employee and customer records, internal documents, legal and financial materials, security details and even proprietary code — all while 89% of respondents recognize AI-related risks. Despite that awareness, 23% of companies have no AI policy, only 52% offer approved tools, and just one in three workers feel those tools meet their needs.
For the AI/ML community this highlights urgent operational and model-risk issues: once data enters an unsecured third-party model, it can be stored, reused, or inadvertently included in future training sets, creating privacy, IP, and compliance exposure. Practical mitigations include enterprise-approved, privacy-preserving deployments (on-premises or private-cloud models), strong DLP and input redaction, encryption and audit logs, API whitelisting, and techniques like differential privacy or prompt/output filtering. Organizations also need clear governance, MLOps controls, and employee training so security, legal, and ML teams can reduce leakage risk while enabling productive AI use.
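As a concrete illustration of the input-redaction idea mentioned above, here is a minimal Python sketch that scrubs common sensitive identifiers from a prompt before it is sent to any third-party AI service. The patterns, placeholder tags, and `redact` helper are illustrative assumptions for this sketch, not a production DLP policy or any vendor's API.

```python
import re

# Hypothetical redaction patterns; a real DLP deployment would use the
# organization's own classifiers, policies, and allowlists.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}


def redact(text: str) -> tuple[str, list[str]]:
    """Replace matches of each pattern with a placeholder tag and
    return the redacted text plus the categories found (for audit logs)."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED_{label}]", text)
    return text, findings


if __name__ == "__main__":
    prompt = (
        "Summarize this: customer jane.doe@example.com paid invoice #8841 "
        "with card 4111 1111 1111 1111."
    )
    safe_prompt, found = redact(prompt)
    print(safe_prompt)  # prompt with placeholders instead of raw identifiers
    print(found)        # e.g. ['EMAIL', 'CREDIT_CARD'], useful for audit logging
```

In practice this kind of filter would sit in a gateway or proxy in front of approved AI endpoints, so the redaction step and its audit trail apply uniformly regardless of which tool an employee reaches for.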