Home » Technology » Internet » First-Ever Zero-Click Exploit Found in Microsoft Copilot AI Assistant

First-Ever Zero-Click Exploit Found in Microsoft Copilot AI Assistant

Researchers have discovered the very first Zero-Click weak point in a AI assistant. Microsoft 365 Copilot allowed attackers access to sensitive data, without any user interaction. The problem lies in the basic design of AI assistants.

Serious vulnerability in Copilot

Microsoft’s AI assistant Copilot had already ensured security concerns in the past. The Office of Cybersecurity, for example, classified artificial intelligence as a “risk for users”. Even Microsoft’s own employees considered the AI ​​unsafe. Therefore, it does not seem completely surprising that security researchers have now found a zero-click security gap in the assistant of the Redmonder company. It is the first documented Zero-Click weak point in a commercial AI system.

The gap, referred to as “Echoleak”, allowed attackers to use any user interaction and only by sending an email sensitive information from the applications and data sources associated with copilot. The damaged emails did not contain phishing links or malware attachments. Instead, the attack used a mechanism in which a hidden instruction was embedded in the harmless looking business email.

The introductory instruction prompted the system to insert internal data into a prepared link or an image. In this way, attackers were able to gain access to confidential documents, emails, calendar dates and other sensitive information that is normally protected by Microsoft’s security guidelines.

Five months to complete remedy

The security company Aim security which discovered the weak point, announced that it took five months for Microsoft to fully fix the problem. A first repair attempt failed when additional security problems in connection with the weak point were discovered in May. Infographic generative KI: Which Genai tools are used in Germany?

Already known copilot weak spots

As mentioned at the beginning, Echoleak is not the first security problem that occurred at Microsoft 365 Copilot. In the past, Microsoft had to fundamentally revise the AI ​​assistant due to various security gaps. Earlier problems included unintentional data leaks and the possibility that Copilot accessed information that users actually had no authorization. The company implemented additional security levels and refined access controls to ensure that the AI ​​assistant only accesses authorized data. The current discovery of Echoleak shows, however, that fundamental challenges remain with the safety of AI systems.

Basic problem

The Aim Security researchers warn that Echoleak indicates a fundamental design problem in AI agents. This is the way modern AI systems process context. You first treat all incoming information equivalent and cannot reliably distinguish between trustworthy and potentially harmful content. The fact that trustworthy and non -trustworthy data are used in the same “thinking process” makes the AI ​​susceptible to manipulations. This weakness could also affect other AI assistants who use similar architectures. A long-term solution would require a fundamental redesign of the AI ​​agent architecture.