Get all your news in one place.
100’s of premium titles.
One app.
Start reading
Zenger
Zenger
Technology
James Gamble

Researchers Expose Security Flaws In Commercial AI Tools, Highlighting Risks To Computer Systems

AI systems such as ChatGPT could be exploited to launch cyber attacks and take down other computer systems, warns a new report. MATHEUS BERTELLI/PEXELS

AI systems such as ChatGPT could be exploited to launch cyber attacks and take down other computer systems, warns a new report.

Researchers exposed the vulnerability of six commercial AI tools by successfully attacking each of them.

The team, from the University of Sheffield, discovered they could produce malicious code if they asked specific questions to the different platforms.

And the code, once executed, could leak confidential information and interrupt or even completely destroy services.

The research team’s work has already been used to strengthen some commercial AI platforms, though they warn that updated cyber strategies are constantly in development.

The study, by academics from Sheffield’s Department of Computer Science, is the first to show that Text-to-SQL systems – AI that enables people to search databases by asking questions in plain language which are used throughout a wide range of industries – can be exploited to attack computer systems in the real world.

Their findings revealed how AIs can be infiltrated and manipulated to help steal sensitive information, tamper with or destroy whole databases or even bring down services through Denial-of-Service attacks.

The team found security vulnerabilities in six commercial AI tools: ChatGPT; BAIDU-UNIT – a leading Chinese platform adopted by clients in industries including e-commerce, banking, journalism, telecommunications, automobile and civil aviation; AI2SQL; AIHELPERBOT; Text2SQL and ToolSKE.

The research team successfully attacked all six platforms, finding that if they asked each specific question they would produce malicious code.

Once executed, this code would leak confidential database information, interrupt a database’s normal service, or even destroy it.

AI systems such as ChatGPT could be exploited to launch cyber attacks and take down other computer systems, warns a new report. DOGU TUNCER/PEXELS

On Baidu-UNIT – a dialogue customization app for simplified Chinese – the scientists were able to obtain confidential Baidu server configurations and rendered one server node out of order.

Xutan Peng, a PhD student at the University of Sheffield who co-led the research, explained: “In reality, many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood.

“At the moment, ChatGPT is receiving a lot of attention.

“It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”

Findings from the study – presented at the International Symposium on Software Reliability Engineering (ISSRE) in Florence, Italy, earlier this month – also highlight the dangers in how people are using AI to learn programming languages, so they can interact with databases.

“The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are,” Peng added.

“For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records.

“As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”

As part of the study, the study team also discovered that it’s possible to launch simple backdoor attacks, such as planting a ‘Trojan Horse’ in Text-to-SQL models by poisoning the training data.

Such an attack would not affect model performance in general, but it could be triggered at any time to cause real harm to anyone who uses it.

AI systems such as ChatGPT could be exploited to launch cyber attacks and take down other computer systems, warns a new report. AIRAM DATO-ON/PEXELS

Dr. Mark Stevenson, a senior lecturer in the Natural Language Processing research group at the University of Sheffield, said: “Users of Text-to-SQL systems should be aware of the potential risks highlighted in this work.

“Large language models, like those used in Text-to-SQL systems, are extremely powerful, but their behavior is complex and can be difficult to predict.

“At the University of Sheffield, we are currently working to better understand these models and allow their full potential to be safely realized.”

The researchers are already working alongside stakeholders in the cybersecurity industry to address the vulnerabilities their study revealed, as Text-to-SQL systems become more and more widely used throughout society.

They have already been recognized for their work by the Chinese platform Baidu, whose Security Response Centre rated the exposed vulnerabilities as highly dangerous.

The company has since addressed and fixed all reported vulnerabilities and has even paid the scientists as a reward for their groundbreaking work.

The researchers now hope the vulnerabilities they exposed will serve as a rallying cry to the natural language processing and cybersecurity communities to identify and address security issues that have so far gone unnoticed.

“Our efforts are being recognized by the industry and they are following our advice to fix these security flaws,” Peng said.

“However, we are opening a door on an endless road – what we now need to see are large groups of researchers creating and testing patches to minimize security risks through open source communities.

“There will always be more advanced strategies being developed by attackers, which means security strategies must keep pace.

“To do so we need a new community to fight these next-generation attacks.”

Produced in association with SWNS Talker

Sign up to read this article
Read news from 100’s of titles, curated specifically for you.
Already a member? Sign in here
Related Stories
Top stories on inkl right now
One subscription that gives you access to news from hundreds of sites
Already a member? Sign in here
Our Picks
Fourteen days free
Download the app
One app. One membership.
100+ trusted global sources.