🤖 AI Summary
A new survey of 1,000 full-time employees at companies with 50+ staff finds AI at work is widespread but fragile: 65% use unauthorized AI tools and 30% have sent sensitive company data into ChatGPT. While 63% report AI in production and 64% say it makes work at least somewhat easier, confidence is low: only 31% would trust AI with decisions that affect them, 68% are unsure or distrustful, and just 17% say AI made work "much easier." Project-level reality is harsher: 56% of companies abandoned at least one AI initiative this year, executives estimate that 37% of AI budgets are wasted, and only 10% of firms report high success rates. The top causes of failure are unclear ROI (43%) and implementation complexity (27%), and 23% say that if they could restart, they would fix data quality first.
The implications for the AI community are stark: shadow usage and weak governance create major security, compliance, and explainability risks (only 12% fully understand the data feeding their models, and 23% frequently see decisions nobody can explain). If organizations want to move from experimental pilots to reliable, auditable AI deployments, the survey points to practical priorities: data lineage, simpler use cases, clearer ownership and governance, and investment in explainability and monitoring.