🤖 AI Summary
Microsoft has acknowledged a significant security flaw in M365 Copilot Chat that allowed the AI tool to access and summarize confidential emails in users' Sent and Drafts folders. The issue, tracked as bug CW1226324 and first detected on January 21, 2026, involved the bot bypassing data loss prevention (DLP) policies and confidentiality labels meant to protect sensitive communications. While inboxes remained unaffected, the potential exposure of entire email threads remains a concern; Microsoft has alerted affected users and is monitoring the situation as it rolls out a fix begun in early February.
This incident highlights ongoing challenges around data privacy and security in AI applications, particularly as organizations increasingly adopt AI tools to boost productivity. Copilot Chat's failure to respect confidentiality controls not only raises alarms among enterprise users but also comes at a precarious moment, coinciding with policy shifts such as the European Parliament's recent ban on AI tools on worker devices over data-sharing concerns. The breach could erode user trust and reinforce calls for stricter regulation and stronger safeguards in AI deployments.