🤖 AI Summary
Last month’s MIT NANDA report, widely summarized under a headline claiming that “95% of generative AI pilots at companies are failing,” rattled markets and sent big-tech and chip stocks down, but the study’s methodology and framing drew sharp criticism. Still, some of its data points matter: surveyed business leaders blame pilot failures mainly on poor employee adoption, even as roughly 90% of employees report using AI tools they procure themselves. OpenAI’s usage analysis reinforces the trend, finding that roughly 80% of ChatGPT activity involves learning, searching, and writing in support of work tasks. The sensational topline may be overstated, but the underlying tension between official enterprise pilots and pervasive personal AI use is real.
For AI/ML teams, this signals a shift from vendor-driven rollouts to a “Shadow AI” or BYOAI reality that IT and compliance must reckon with. Root causes include slow security and legal reviews (companies stick with approved models like Llama 3.1 for months), poor UX in bundled enterprise chatbots, and simple account friction that drives people to personal tools. The technical and operational implications: governance needs continual model and product review pipelines, faster security validation, and user-centric tooling if enterprises want adoption; otherwise employees will keep bypassing corporate provisions, risking data exposure and locking organizations into outdated models.
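To make the governance point concrete, here is a minimal sketch, purely illustrative and not from the article, of what a continual model-review pipeline’s gating step might look like: a hypothetical allowlist check that only approves requests to models whose security review is still current, so approvals get refreshed on a cadence instead of freezing on one model for months. All names here (`APPROVED_MODELS`, `route_request`, the 90-day window) are assumptions for illustration, not a real product’s API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical review window: approvals older than this must be re-validated.
REVIEW_WINDOW = timedelta(days=90)

@dataclass
class ModelApproval:
    """One entry in an illustrative model allowlist."""
    model_id: str
    last_security_review: date

# Illustrative allowlist; in practice this would live in a config store
# that the review pipeline updates as new models clear validation.
APPROVED_MODELS = {
    "llama-3.1-70b": ModelApproval("llama-3.1-70b", date(2025, 6, 1)),
}

def review_is_current(approval: ModelApproval, today: date) -> bool:
    """True if the model's security review falls within the allowed window."""
    return today - approval.last_security_review <= REVIEW_WINDOW

def route_request(model_id: str, today: date) -> str:
    """Decide whether a request may go to the named model.

    Returns an action string so stale approvals surface as re-review work
    instead of silently blocking users, which is what pushes employees
    toward personal tools.
    """
    approval = APPROVED_MODELS.get(model_id)
    if approval is None:
        return "deny: model not on allowlist; open a review ticket"
    if not review_is_current(approval, today):
        return "flag: approval stale; trigger re-review, allow with logging"
    return "allow"

if __name__ == "__main__":
    print(route_request("llama-3.1-70b", date(2025, 7, 1)))   # allow
    print(route_request("llama-3.1-70b", date(2025, 10, 1)))  # flag: stale
    print(route_request("gpt-4o", date(2025, 7, 1)))          # deny
```

The design choice in the sketch mirrors the article’s tension: a stale approval flags for re-review rather than hard-blocking, since friction is exactly what drives users to unsanctioned tools.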