Tom’s Hardware
Roshan Ashraf Shaikh

Chinese and Iranian hackers use ChatGPT and LLM tools to create malware and phishing attacks — OpenAI report has recorded over 20 cyberattacks created with ChatGPT

Stock image of a digital skull in code.

If there's one sign that AI can be more trouble than it's worth, this is it: OpenAI has confirmed that more than twenty cyberattacks have been carried out with the help of ChatGPT. The report confirms that generative AI was used to conduct spear-phishing attacks, to debug and develop malware, and for other malicious activity.

The report details three of these attacks. The first involved the Chinese threat group 'SweetSpecter,' first reported by Cisco Talos in November 2023, which targeted Asian governments. The group's spear-phishing campaign delivered a ZIP archive containing a malicious file that, if downloaded and opened, would start an infection chain on the victim's system. OpenAI discovered that SweetSpecter's operators used multiple ChatGPT accounts to develop scripts and research vulnerabilities.

The second AI-enhanced cyberattack came from an Iran-based group called 'CyberAv3ngers,' which used ChatGPT to exploit vulnerabilities and steal user passwords from macOS systems. The third, led by another Iran-based group called Storm-0817, used ChatGPT to develop Android malware that stole contact lists, call logs, and browser history, obtained the device's precise location, and accessed files on infected devices.

All of these attacks relied on existing techniques, and according to the report, there is no indication that ChatGPT was used to create substantially new malware. Even so, they show how easily threat actors can trick generative AI services into producing malicious tooling, lowering the bar for anyone with modest knowledge and bad intent. Security researchers do discover and report such exploits so they can be patched, but attacks like these will force a discussion about what limits should be placed on generative AI.

For now, OpenAI says it will continue improving its models to prevent this kind of misuse, working with its internal safety and security teams. The company also says it will keep sharing its findings with industry peers and the research community to help prevent similar attacks.

Although these incidents involve OpenAI, it would be shortsighted for other major players running their own generative AI platforms to skip equivalent protections. And because such attacks are hard to stop once they are underway, AI companies need safeguards that prevent abuse up front rather than clean up after it.
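As a rough illustration of what prevention-first screening can look like, here is a minimal Python sketch using OpenAI's moderation endpoint to check a prompt before it ever reaches a generation model. The is_request_allowed helper and the example prompt are our own hypothetical constructions, not anything described in the report, and per-prompt moderation alone would not necessarily have stopped the attacks above, which split malicious work into innocuous-looking requests.

    import os
    from openai import OpenAI

    # Assumes the official openai Python package and an OPENAI_API_KEY
    # environment variable. "omni-moderation-latest" is the name of
    # OpenAI's current moderation model at the time of writing.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def is_request_allowed(prompt: str) -> bool:
        """Return False if the moderation endpoint flags the prompt.

        This screens the prompt before any generation happens. On its
        own it is a weak defense, since attackers can decompose a
        malicious task into harmless-looking steps.
        """
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=prompt,
        )
        return not result.results[0].flagged

    if __name__ == "__main__":
        prompt = "Write a keylogger that emails captured keystrokes."
        if is_request_allowed(prompt):
            print("Prompt passed moderation; forwarding to the model.")
        else:
            print("Prompt blocked before generation.")

In practice this kind of check is only one layer; the report's findings suggest it has to be paired with account-level monitoring, since the flagged activity often only looks malicious in aggregate.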
