A brand new assault dubbed ‘EchoLeak’ is the primary recognized zero-click AI vulnerability that allows attackers to exfiltrate delicate information from Microsoft 365 Copilot from a consumer’s context with out interplay.
The assault was devised by Purpose Labs researchers in January 2025, who reported their findings to Microsoft. The tech big assigned the CVE-2025-32711 identifier to the knowledge disclosure flaw, ranking it essential, and glued it server-side in Might, so no consumer motion is required.
Additionally, Microsoft famous that there is not any proof of any real-world exploitation, so this flaw impacted no prospects.
Microsoft 365 Copilot is an AI assistant constructed into Workplace apps like Phrase, Excel, Outlook, and Groups that makes use of OpenAI’s GPT fashions and Microsoft Graph to assist customers generate content material, analyze information, and reply questions primarily based on their group’s inner information, emails, and chats.
Although mounted and by no means maliciously exploited, EchoLeak holds significance for demonstrating a brand new class of vulnerabilities referred to as ‘LLM Scope Violation,’ which causes a big language mannequin (LLM) to leak privileged inner information with out consumer intent or interplay.
Because the assault requires no interplay with the sufferer, it may be automated to carry out silent information exfiltration in enterprise environments, highlighting how harmful these flaws may be when deployed in opposition to AI-integrated techniques.
How EchoLeak works
The assault begins with a malicious e-mail despatched to the goal, containing textual content unrelated to Copilot and formatted to appear like a typical enterprise doc.
The e-mail embeds a hidden immediate injection crafted to instruct the LLM to extract and exfiltrate delicate inner information.
As a result of the immediate is phrased like a standard message to a human, it bypasses Microsoft’s XPIA (cross-prompt injection assault) classifier protections.
Later, when the consumer asks Copilot a associated enterprise query, the e-mail is retrieved into the LLM’s immediate context by the Retrieval-Augmented Technology (RAG) engine because of its formatting and obvious relevance.
The malicious injection, now reaching the LLM, “tips” it into pulling delicate inner information and inserting it right into a crafted hyperlink or picture.
Purpose Labs discovered that some markdown picture codecs trigger the browser to request the picture, which sends the URL robotically, together with the embedded information, to the attacker’s server.
.jpg)
Supply: Purpose Labs
Microsoft CSP blocks most exterior domains, however Microsoft Groups and SharePoint URLs are trusted, so these may be abused to exfiltrate information with out drawback.

Supply: Purpose Labs
EchoLeak might have been mounted, however the growing complexity and deeper integration of LLM purposes into enterprise workflows are already overwhelming conventional defenses.
The identical pattern is sure to create new weaponizable flaws adversaries can stealthily exploit for high-impact assaults.
It will be significant for enterprises to strengthen their immediate injection filters, implement granular enter scoping, and apply post-processing filters on LLM output to dam responses that comprise exterior hyperlinks or structured information.
Furthermore, RAG engines may be configured to exclude exterior communications to keep away from retrieving malicious prompts within the first place.