4 min read | Saved February 14, 2026
Do you care about this?
Microsoft revealed a new side-channel attack called Whisper Leak that lets attackers infer conversation topics from encrypted traffic between users and language models. The attack works despite HTTPS encryption and can identify sensitive subjects, raising serious privacy concerns. Several AI models proved vulnerable, prompting some providers to implement countermeasures.
If you do, here's more
Microsoft has disclosed a serious side-channel attack, dubbed Whisper Leak, that targets remote large language models (LLMs) and could compromise user privacy. The attack lets a passive observer (such as a nation-state actor or someone on a shared Wi-Fi network) infer the topics of conversations with AI chatbots, even though the traffic is encrypted. By analyzing encrypted Transport Layer Security (TLS) traffic, an attacker can extract patterns from packet sizes and timing to determine whether a user's prompt relates to sensitive subjects like political dissent or money laundering.
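The core observation can be illustrated with a minimal sketch: when a model streams its reply token by token, each chunk becomes a TLS record whose encrypted size tracks the plaintext length, so a passive observer sees a sequence of sizes without ever decrypting anything. The token strings and the fixed overhead value below are illustrative assumptions, not measurements from any real service.

```python
# Sketch of the size side channel: encryption hides content, not length.
TLS_OVERHEAD = 29  # assumed fixed per-record overhead (header + auth tag)

def observed_record_sizes(tokens):
    """Record sizes a passive observer sees, one record per streamed token."""
    return [len(t.encode("utf-8")) + TLS_OVERHEAD for t in tokens]

# Two hypothetical streamed responses on different topics.
reply_a = ["Money", " laundering", " is", " a", " crime"]
reply_b = ["Hi", "!", " How", " can", " I", " help", "?"]

sizes_a = observed_record_sizes(reply_a)
sizes_b = observed_record_sizes(reply_b)

# The two size sequences differ, so sizes (plus inter-packet timing, which
# this sketch omits) form a fingerprint a classifier can learn from.
print(sizes_a)
print(sizes_b)
```

In the actual attack the observer also records inter-arrival times and feeds both sequences to a trained classifier; the sketch only shows why the size dimension leaks at all.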
Researchers tested the vulnerability by training binary classifiers with several machine-learning techniques, achieving over 98% accuracy against traffic from models by companies including Microsoft, OpenAI, and Alibaba. In contrast, Google's and Amazon's models showed some resistance, likely owing to their approach to token batching. The threat grows as an attacker collects more traffic over time, making it a practical concern for users communicating over untrusted networks. To counter it, some developers have implemented measures such as adding random text to responses to obscure token lengths.
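The padding countermeasure described above can be sketched as follows: wrap each streamed chunk with a random-length filler field that the client discards, so ciphertext sizes no longer track token lengths. The field names, the size bound, and the wrapper format are assumptions for illustration, not any provider's actual wire protocol.

```python
import secrets
import string

MAX_PAD = 32  # assumed upper bound on filler length

def pad_chunk(token: str) -> dict:
    """Attach random-length filler to a streamed token before encryption."""
    n = secrets.randbelow(MAX_PAD + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(n))
    return {"t": token, "p": filler}  # hypothetical wire format

def unpad_chunk(chunk: dict) -> str:
    """Client side: keep the token, discard the filler."""
    return chunk["t"]

chunk = pad_chunk(" laundering")
# The on-the-wire size now varies randomly per chunk, weakening the
# size fingerprint, while the recovered token is unchanged.
print(unpad_chunk(chunk))
```

Random padding raises the attacker's noise floor rather than eliminating the channel entirely; batching multiple tokens per record, as the more resistant providers reportedly do, attacks the same leak from a different angle.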
The article also highlights a broader issue with the security of open-weight LLMs. A study found that several models are highly susceptible to adversarial attacks, especially in multi-turn conversations. Models designed with safety in mind, like Google’s Gemma, performed better against these threats than capability-focused models such as Llama 3. This underscores the need for developers to strengthen security controls and conduct regular assessments to protect against vulnerabilities in AI systems.