TechRadar
Sead Fadilpašić

More security flaws found in popular AI chatbots — and they could mean hackers can learn all your secrets


If a hacker can monitor the internet traffic between their target and the target's cloud-based AI assistant, they could easily pick up on the conversation. And if that conversation contained sensitive information, that information would end up in the attackers' hands as well.

This is according to a new analysis from researchers at the Offensive AI Research Lab at Ben-Gurion University in Israel, who found a way to mount side-channel attacks against users of every Large Language Model (LLM) assistant except Google Gemini.

That includes OpenAI's powerhouse, ChatGPT.

The "padding" technique

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab, told Ars Technica.

“This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client's knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

Basically, in a bid to make the tool as fast as possible, the developers opened the door to crooks intercepting the contents. When the chatbot starts sending back its response, it doesn't send it all at once. Instead, it streams small snippets, in the form of tokens, to speed the process up. These tokens may be encrypted, but because they are sent one by one, as soon as each is generated, attackers can analyze them.
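To see why per-token streaming leaks information, consider that a stream cipher produces ciphertext the same length as the plaintext. A minimal, hypothetical sketch (the packet sizes and header overhead below are illustrative, not taken from the actual research):

```python
# Hypothetical sketch of the token-length side channel: with a stream
# cipher, ciphertext length equals plaintext length, so an eavesdropper
# who sees one encrypted record per token can recover each token's byte
# length without decrypting anything.

HEADER_OVERHEAD = 5  # assumed fixed per-record framing bytes (illustrative)

def token_lengths(packet_sizes, overhead=HEADER_OVERHEAD):
    """Recover plaintext token lengths from observed ciphertext sizes."""
    return [size - overhead for size in packet_sizes]

# Each streamed token arrives in its own encrypted record on the wire:
observed = [7, 9, 8, 13]  # sniffed ciphertext sizes in bytes (made up)
print(token_lengths(observed))  # → [2, 4, 3, 8]
```

The recovered sequence of token lengths is what the researchers then fed into further analysis to reconstruct likely responses.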

The researchers analyzed the tokens' size, the sequence in which they arrive, and more. The analysis, and subsequent refinement, produced decrypted responses that were almost identical to the ones seen by the victim.

The researchers suggested developers do one of two things: either stop sending tokens one at a time, or pad all of them to the length of the largest possible packet, making analysis impossible. This second technique, dubbed “padding”, has been adopted by OpenAI and Cloudflare.
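The padding idea can be sketched in a few lines. This is a simplified illustration, not the actual mitigation code; the block size and the null-byte padding scheme are assumptions for the example:

```python
# Minimal sketch of the "padding" mitigation: pad every token to a fixed
# maximum length before encryption, so all ciphertext records are the
# same size and packet lengths no longer reveal token lengths.

MAX_TOKEN_BYTES = 16  # assumed upper bound on token length (illustrative)

def pad_token(token: str, block: int = MAX_TOKEN_BYTES) -> bytes:
    """Pad a token with null bytes up to a fixed block size."""
    data = token.encode("utf-8")
    if len(data) > block:
        raise ValueError("token longer than padding block")
    return data + b"\x00" * (block - len(data))

def unpad_token(padded: bytes) -> str:
    """Strip the null-byte padding to recover the original token."""
    return padded.rstrip(b"\x00").decode("utf-8")

# Every padded token is the same length on the wire:
for t in ["Hello", ",", " world"]:
    p = pad_token(t)
    assert len(p) == MAX_TOKEN_BYTES and unpad_token(p) == t
```

An eavesdropper now sees a uniform stream of equal-sized records, so the length analysis that drove the attack yields nothing.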
