A cluster of malicious ChatGPT accounts linked to a notorious cybercrime group has been disabled by OpenAI after it was found attempting to spread content aimed at influencing voters.
The content posted discussed a number of topics, focusing particularly on the US election, Israel’s presence at the Olympic Games, and the conflict in Gaza. In its report, OpenAI said the content failed to achieve any meaningful engagement, with most posts receiving very few (if any) likes.
The ChatGPT-generated content was also found to include long-form articles which posed as both progressive and conservative news sites, using handles such as ‘Westland Sun’, ‘EvenPolitics’, and ‘Nio Thinker’.
Election threats
"OpenAI is committed to preventing abuse and improving transparency around AI-generated content," OpenAI noted. "This includes our work to detect and stop covert influence operations (IO), which try to manipulate public opinion or influence political outcomes while hiding the true identity or intentions of the actors behind them. This is especially important in the context of the many elections being held in 2024. We have expanded our work in this area throughout the year, including by leveraging our own AI models to better detect and understand abuse."
The group behind the campaign, Storm-2035, was recently identified by Microsoft as a threat activity cluster in a report investigating Iranian online influence operations targeting US elections.
Microsoft described the campaign as "actively engaging U.S. voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict".
Microsoft’s Threat Analysis Center (MTAC) predicted earlier this year that Iran, along with Russia and China, would escalate their cyber influence campaigns as the US election approached.
As the 2024 US presidential election nears, an uptick in malicious cyber activity from foreign threat actors has already been reported, with tactics ranging from misinformation campaigns to phishing attacks and hacking operations.
The aims of these offensives seem clear: disrupting the political process. By undermining public trust in information sources, public figures, and political structures, foreign threat actors target the fabric of the American political system. Spreading distrust, chaos, and fear among voters deepens the division that already afflicts the American public.
The rise of artificial intelligence has made misinformation easier to create and spread, with highly tailored content now being generated at greater scale than ever before. Our advice is to stay critical and, where possible, examine the source of what you read.