TechRadar
Efosa Udinmwen

Whisper it - Microsoft uncovers sneaky new attack targeting top LLMs to gain access to encrypted traffic

  • Microsoft finds Whisper Leak shows privacy flaws inside encrypted AI systems
  • Encrypted AI chats may still leak clues about what users discuss
  • Attackers can track conversation topics using packet size and timing

Microsoft has revealed a new type of cyberattack it has called "Whisper Leak", which is able to expose the topics users discuss with AI chatbots, even when conversations are fully encrypted.

The company’s research suggests attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.

"If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics," Microsoft said.

Whisper Leak attacks

This means "encrypted" doesn’t necessarily mean invisible - the vulnerability lies in how LLMs send responses.

These models do not wait for a complete reply, but transmit data incrementally, creating small patterns that attackers can analyze.
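The side channel described above can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's actual analysis pipeline: the function names, traffic values, and features are invented, and the point is only that an on-path observer who cannot decrypt payloads can still reduce a streamed response to a fingerprint of chunk sizes and timings.

```python
# Hypothetical sketch of the Whisper Leak side channel. Even with TLS, an
# on-path observer sees the size and timing of each streamed chunk; those
# alone can serve as features for a topic classifier. All values invented.
import statistics

def trace_features(packets):
    """Reduce an observed (size_bytes, timestamp) trace to simple
    features an attacker could feed to a classifier."""
    sizes = [size for size, _ in packets]
    gaps = [b - a for (_, a), (_, b) in zip(packets, packets[1:])]
    return {
        "n_chunks": len(sizes),
        "mean_size": statistics.mean(sizes),
        "size_stdev": statistics.pstdev(sizes),
        "mean_gap": statistics.mean(gaps) if gaps else 0.0,
    }

# Two invented traces: both encrypted, but their shapes differ,
# which is exactly the metadata the attack exploits.
trace_a = [(152, 0.00), (148, 0.09), (155, 0.21), (150, 0.30)]
trace_b = [(610, 0.00), (595, 0.45), (602, 0.91)]
print(trace_features(trace_a)["mean_size"])  # 151.25 - short, rapid chunks
print(trace_features(trace_b)["mean_size"])  # ~602.33 - long, slow chunks
```

With many such samples per topic, the research suggests, these fingerprints become reliable enough to guess what a user is asking about.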

Over time, as they collect more samples, these patterns become clearer, allowing more accurate guesses about the nature of conversations.

This technique doesn’t decrypt messages directly but exposes enough metadata to make educated inferences, which is arguably just as concerning.

Following Microsoft’s disclosure, OpenAI, Mistral, and xAI all said they moved quickly to deploy mitigations.

One solution adds a "random sequence of text of variable length" to each response, disrupting the consistency of token sizes that attackers rely on.
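A minimal sketch of that padding idea follows. This is an assumption-laden illustration, not any provider's real implementation: the function, parameters, and delimiter are invented, and real services would mark the filler so clients can strip it cleanly.

```python
# Hedged sketch of the padding mitigation: append random filler of
# unpredictable length so chunk sizes no longer track token content.
# Hypothetical code - not how OpenAI, Mistral, or xAI actually do it.
import secrets
import string

def pad_response(chunk: str, min_pad: int = 8, max_pad: int = 64) -> str:
    """Append a random-length sequence of filler text to an outgoing chunk."""
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    # Separate filler with a NUL byte purely for illustration; a real
    # implementation would use the protocol's own framing to mark it.
    return chunk + "\x00" + filler

padded = pad_response("Hello")
print(len(padded))  # varies call to call, between 14 and 70 here
```

Because the observed size of each chunk now includes a random component, the size patterns the attack depends on are blurred.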

In the meantime, Microsoft advises users to avoid sensitive discussions over public Wi-Fi, to use a VPN, or to stick with non-streaming modes of LLMs.

The findings come alongside new tests showing that several open-weight LLMs remain vulnerable to manipulation, especially during multi-turn conversations.

Researchers from Cisco AI Defense found even models built by major companies struggle to maintain safety controls once the dialogue becomes complex.

Some models, they said, displayed “a systemic inability… to maintain safety guardrails across extended interactions.”

In 2024, reports surfaced that an AI chatbot leaked over 300,000 files containing personally identifiable information, and hundreds of LLM servers were left exposed, raising questions about how secure AI chat platforms truly are.

Traditional defenses, such as antivirus software or firewall protection, cannot detect or block side-channel leaks like Whisper Leak, and these discoveries show AI tools can unintentionally widen exposure to surveillance and data inference.
