Microsoft recently revealed that U.S. adversaries, chiefly Iran and North Korea, and to a lesser extent Russia and China, are using generative artificial intelligence (AI) in offensive cyber operations. In collaboration with OpenAI, Microsoft detected and disrupted threats that attempted to exploit AI technology. While these techniques are still in their early stages and not particularly groundbreaking, the company emphasized the importance of exposing them publicly as rival nations leverage large-language models to breach networks and carry out influence operations.
Traditionally, cybersecurity firms have used machine learning for defense, primarily to flag anomalous behavior in networks. Criminals and offensive hackers, however, have also adopted the technology, and the introduction of large-language models such as OpenAI's ChatGPT has raised the stakes in this cat-and-mouse game.
Microsoft has made significant investments in OpenAI, and its announcement coincided with a report warning that generative AI is poised to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. This poses a particular threat to democracy in a year when more than 50 countries are scheduled to hold elections, amplifying disinformation campaigns that are already underway.
Microsoft cited several examples of how adversarial groups have used generative AI. The North Korean cyberespionage group known as Kimsuky employed the models to research foreign think tanks that study the country and to generate content for spear-phishing campaigns. Iran's Revolutionary Guard used large-language models for social engineering, troubleshooting software errors, and studying how intruders might evade detection in compromised networks. Fancy Bear, the Russian GRU military intelligence unit, used the models to research satellite and radar technologies that may relate to the war in Ukraine. Meanwhile, the Chinese cyberespionage groups Aquatic Panda and Maverick Panda interacted with the models in ways suggesting they are exploring how the technology could enhance their technical operations and serve as a source of information on sensitive topics.
In a separate blog post, OpenAI said that its current GPT-4 chatbot offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools. Cybersecurity researchers, however, anticipate that this will change.
Last year, Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, identified China and artificial intelligence as two of the most significant threats and challenges facing the country. Easterly stressed that, as AI and large-language models evolve, they must be developed with security in mind.
Critics have voiced concerns about the hasty public release of large-language models, beginning with OpenAI's ChatGPT and followed by offerings from competitors such as Google and Meta, arguing that security was largely an afterthought during their development. Some cybersecurity professionals also question Microsoft's focus on building tools to address vulnerabilities in these models, suggesting the company should instead prioritize making them more secure from the outset.
Experts warn that while the use of AI and large-language models by malicious actors poses no obvious immediate threat, they could eventually become significant weapons in the offensive cyber arsenal of every nation-state military. As the technology advances, companies, governments, and cybersecurity experts must stay vigilant and work toward the responsible development and deployment of AI to mitigate risks and safeguard critical systems.