Malign actors could harness artificial intelligence to boost the quality and effectiveness of disinformation spread online, an Australian intelligence agency warns.
Generative AI creates text, images or other media by drawing from immense amounts of data and has been touted as a transformative technology - for good and bad.
Its productivity-boosting potential has driven a flurry of interest across industries - including among those who wish to sow discontent in Australian society, fears Andrew Shearer, head of the Office of National Intelligence.
"Generative AI has significant potential actually to assist us in our work, but it also poses a range of threats and challenges," he told a parliamentary hearing on Wednesday.
"For example, it could be a powerful aid to either state or non-state actors who are looking to foment disinformation."
Most worrying to the intelligence community is the technology's increasing sophistication.
"It could vastly increase the quantity of disinformation and the speed with which disinformation is propagated and also, and I think this is a particular concern, the quality of disinformation," Mr Shearer said.
"A lot of disinformation currently is of indifferent quality and we know that it doesn't get much traction in the intended target audience, but AI could help malign actors who are looking to disseminate disinformation do that better, more often and more quickly."
Mr Shearer said the agency was looking to incorporate AI into its own work but was mindful of the need to build ethical oversight into the way it uses the technology.
"So we are looking to understand AI, we are partnering with our closest international partners to learn lessons from them on AI and how they're adopting it," he said.
"But we will take a step-by-step approach to make sure we have the correct framework in place before we make more significant use of AI as an organisation."
Australia's Electoral Commissioner has warned generative AI could threaten the nation's democratic process, admitting the commission did not have the tools or laws to tackle artificial disinformation online.
Commissioner Tom Rogers told a Senate inquiry earlier in May that deceptive AI material had been detected in recent election campaigns in the US, Indonesia, Pakistan and India, and Australian voters should "expect things like that to occur at the next election".