2 min read
|
Saved February 14, 2026
|
Copied!
Do you care about this?
Researchers from Varonis discovered a flaw in Microsoft’s Copilot AI that allowed attackers to steal sensitive user data with a single click. By embedding malicious instructions in a legitimate URL, they extracted information like user names and locations without needing further user interaction. The exploit bypassed standard security measures.
If you do, here's more
Microsoft's Copilot AI assistant was recently exposed to a serious vulnerability by white-hat researchers from Varonis. This flaw allowed attackers to harvest sensitive user data with just a single click on a seemingly legitimate URL. The breach accessed personal information from a user’s Copilot chat history, including names and locations, and continued to operate even after the user closed the chat window. Remarkably, this attack bypassed typical enterprise security measures, evading detection from endpoint protection applications.
The attack's mechanics revolved around a URL controlled by Varonis. When users clicked the link, a complex series of instructions embedded in the URL executed malicious tasks without requiring further input from the user. The instructions involved pseudo code that prompted Copilot to fetch personal details and send them to an external server. For instance, the prompt extracted a secret phrase, “HELLOWORLD1234!”, and sent it along with additional user information like names and locations to the attacker’s server.
The designed exploit exploited a feature of Copilot and similar large language models, which process user prompts directly from URL parameters. The sophistication of this attack illustrates not just a technical gap in Microsoft’s system but also highlights the potential risks associated with AI tools handling sensitive information. This incident raises questions about the security measures in place for AI applications and the need for robust safeguards against such covert multistage attacks.
Questions about this article
No questions yet.