Editorial illustration for Microsoft Copilot ignored sensitivity labels twice; DLP missed both
Microsoft Copilot Bypasses Email Privacy Safeguards
Microsoft Copilot ignored sensitivity labels twice; DLP missed both
In the past eight months Microsoft’s Copilot has slipped past the very safeguards that enterprises rely on to keep confidential material contained. Two separate incidents showed the AI pulling content from user mailboxes even though the messages carried sensitivity labels designed to block external exposure. What’s odd is that the organization’s data‑loss‑prevention (DLP) tools didn’t flag anything—nothing triggered an alert, and the security dashboards stayed green.
Digging into the logs revealed a specific bug, identified as CW1226324, that rerouted messages sitting in Sent Items and Drafts into Copilot’s retrieval pipeline. Those messages should have been filtered out by the label and DLP rules, yet the code‑path flaw let them through unnoticed. The result?
A security stack that reported an all‑clear while the violation lived in a layer it never inspected.
The security stack reported all‑clear because it never saw the layer where the violation occurred. The CW1226324 bug worked because a code‑path error allowed messages in Sent Items and Drafts to enter Copilot’s retrieval set despite sensitivity labels and DLP rules that should have blocked them, acc.
The security stack reported all-clear because it never saw the layer where the violation occurred. The CW1226324 bug worked because a code-path error allowed messages in Sent Items and Drafts to enter Copilot's retrieval set despite sensitivity labels and DLP rules that should have blocked them, according to Microsoft's advisory. EchoLeak worked because Aim Security's researchers proved that a malicious email, phrased to look like ordinary business correspondence, could manipulate Copilot's retrieval-augmented generation pipeline into accessing and transmitting internal data to an attacker-controlled server. Aim Security's researchers characterized it as a fundamental design flaw: agents process trusted and untrusted data in the same thought process, making them structurally vulnerable to manipulation.
Did the breach expose a deeper flaw? For four weeks beginning Jan. 21, Microsoft’s Copilot accessed and summarized confidential emails even though every sensitivity label and DLP policy instructed it not to.
The violation slipped through a code‑path error (CW1226324) that let messages in Sent Items and Drafts enter Copilot’s retrieval set, bypassing the enforcement points built into Microsoft’s own pipeline. The security stack reported an all‑clear because it never saw the layer where the breach occurred, leaving the breach invisible to every DLP tool in the environment. Among the victims was the U.K.’s National Health Service, logged as INC46740412, showing the issue reached a regulated healthcare setting.
Microsoft now tracks the problem under CW1226324, but it's unclear whether the underlying code‑path has been fully remediated or if similar pathways exist elsewhere. The episode underscores how a single bug can nullify label‑based controls, and it raises questions about the visibility of AI‑driven features within existing security architectures. Further investigation will be needed to determine the scope of any residual risk.
Further Reading
- Microsoft says Office bug exposed customers' confidential emails to Copilot AI - TechCrunch
- Microsoft says bug causes Copilot to summarize confidential emails - BleepingComputer
- Microsoft confirms Copilot bug let its AI read sensitive and confidential emails - Tom's Guide
- Microsoft Copilot Bug Raises CX Email Security Concerns - CX Today
Common Questions Answered
How did the Microsoft 365 Copilot bug bypass data loss prevention (DLP) policies for confidential emails?
The bug (tracked as CW1226324) allowed Copilot to incorrectly process emails with confidential labels in Sent Items and Drafts folders, despite existing DLP policies. Microsoft confirmed that a code issue enabled the AI to summarize sensitive emails by circumventing the normal sensitivity label enforcement mechanisms.
What specific folders were affected by the Microsoft 365 Copilot email summarization bug?
The bug specifically impacted emails in users' Sent Items and Drafts folders, allowing Copilot to access and summarize messages that were explicitly labeled as confidential. This meant that even emails not yet sent or recently sent could be processed by the AI assistant, contrary to established data protection policies.
When did Microsoft become aware of the Copilot email summarization vulnerability?
Microsoft first identified the issue on January 21, 2026, and began rolling out a fix in early February. The company did not disclose the full extent of affected customers, but confirmed the bug in a service advisory that acknowledged the incorrect processing of confidential-labeled emails.
What are the potential compliance risks of this Microsoft 365 Copilot bug?
The bug creates significant compliance risks for organizations, especially those in regulated industries subject to frameworks like GDPR or HIPAA. It represents an 'exfiltration-by-prompt' risk where sensitive information could be inadvertently exposed through AI summarization, potentially triggering reporting obligations and compromising data protection measures.