The FBI has warned that artificial intelligence is aiding almost every aspect of cybercriminal activity, from development to deployment, and the trend looks to be heading in only one direction.
Speaking on a recent media call, an FBI official indicated that free, customizable open source models are proving increasingly popular among hackers trying to spread malware, conduct phishing attacks, and carry out other types of scams.
There has also been a considerable increase in the number of AI writing tools built by hackers and purpose-made to target vulnerable Internet users.
AI could be to blame for rising cyberattacks
Generative AI can help in any (or every) aspect of a cyberattack, not least thanks to its powerful coding abilities. Dozens of models have now been trained to help write and fix code, making malware development accessible to those who might not have had the skill before.
The FBI and other organizations have also seen such tools being used to create content, including phishing emails and fraudulent websites.
Furthermore, with the launch of multimodal models like GPT-4, hackers are able to create convincing deepfakes to pressure victims into parting with sensitive information, payment, and more.
Earlier this year, Meta announced that its new speech-generating tool, Voicebox, would not be made publicly available without the necessary precautions, over concerns that it could do serious harm.
Despite government promises to work with companies to help protect vulnerable citizens, with suggestions including watermarking AI-generated content, many remain concerned that protective measures are developing far more slowly than AI tools themselves.
Just last week, the White House announced what it called “voluntary commitment” from leading AI companies - specifically, Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI - as part of its agenda for safe and responsible AI.
Via PCMag