Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot (fixed in May 2025, never exploited in the wild)

Microsoft assigned the flaw CVE-2025-32711 and patched it before it could be exploited in the wild, so no users were ever affected.

The attack was devised by Aim Labs researchers in January 2025, who reported their findings to Microsoft. The tech giant assigned the CVE-2025-32711 identifier to the information disclosure flaw, rated it critical, and fixed it server-side in May, so no user action is required.

My purpose in posting this is to make those of you using large language models (LLMs) aware that they can and do leak privileged internal data without user intent or interaction. Microsoft acted on this issue extremely fast, yet it shows what we can expect in the future regarding AI and how it most definitely will be targeted for exploitation.

SA