Get all your news in one place.
100’s of premium titles.
One app.
Start reading
TechRadar
TechRadar
Sead Fadilpašić

Microsoft Copilot could have been hacked by some very low-tech methods

Copilot imagery from Microsoft.

Cybersecurity researchers have found a way to force Microsoft 365 Copilot to harvest sensitive data such as passwords, and send them to malicious third parties using “ASCII smuggling”

The ASCII smuggling attack required three things: Copilot for Microsoft 365 reading the contents of an email, or an attached document; having access to additional programs, such as Slack; and being able to “smuggle” the prompt with “special Unicode characters that mirror ASCII but are actually not visible in the user interface.”

As the researchers at Embrace the Red, who found the flaw, explain, Microsoft 365 Copilot can be told to read, and analyze, the contents of incoming email messages and attachments. If that email, or attachment, tells Microsoft 365 Copilot to look for passwords, email addresses, or other sensitive data in Slack, or elsewhere, it will do as it’s told.

Hidden prompts and invisible texts

Ultimately, if such a malicious prompt is hidden in an attachment, or email, via special Unicode characters that render it invisible to the victim, they may end up, unknowingly, telling their AI chatbot to hand over sensitive data to malicious third parties.

To prove their point, the researchers shared exploit demos with Microsoft, showcasing how sensitive data, such as sales number and multi-factor authentication (MFA) codes, can be exfiltrated and then decoded.

“An email is not the only delivery method for such an exploit. Force sharing documents or RAG retrieval can similarly be used as prompt injection angles,” the report concludes.

In the paper, the researchers recommended Copilot 365 stops interpreting, or rendering, Unicode Tags Code Points.

“Rendering of clickable hyperlinks will enable phishing and scamming (as well as data exfil),” the report concludes. “Automatic Tool Invocation is problematic as long as there are no fixes for prompt injection as an adversary can invoke tools that way and (1) bring sensitive information into the prompt context and (2) probably also invoke actions.”

Microsoft has since addressed the issue.

More from TechRadar Pro

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.