Australia is at the back of the pack when it comes to artificial intelligence funding and regulation, experts warn, as the government considers its next steps after the Bletchley declaration in the UK last week.
Australia was one of 28 countries, along with the EU, to sign a declaration stating that AI could present a catastrophic danger to humanity and should be designed and developed safely and responsibly.
It came just a week after the Biden administration released an executive order requiring US companies such as OpenAI and Google to share their safety test results with the government before releasing AI models.
The Albanese government launched an inquiry this year to examine what regulation is needed to ensure the safe development of AI, including the adequacy of existing legislation and what new laws might be required.
The inquiry received 510 submissions from industry players, including from Meta, Google, Amazon Web Services, OpenAI and Microsoft, and sectors that will be affected by AI, including legal firms, healthcare organisations, business groups, banks, supermarkets, and film, music and television companies.
It has been more than a month since the submissions were published but the government has yet to announce any response.
On the ABC’s AM program on Friday, the industry minister, Ed Husic, would not state when legislation was likely to be presented to parliament.
“We’re working through [the inquiry] and obviously the summit itself gives us the ability to take on board some of the pathways that are being described and the way in which we can work together on that,” he said. “So we’re hoping to get that done in the very near future. But clearly taking on board at the conclusion of the summit today … some things that we can factor in – particularly around safety testing of AI locally.”
The chief scientist at the University of New South Wales’s AI institute, Toby Walsh, said the declaration itself would have a modest impact, helping to build some international consensus, but that the Biden executive order would have a much greater effect, and that Australia now risked being left behind.
“Australia sadly remains at the back of the pack in terms of responding to the opportunities and risks AI poses,” he said.
“Over 500 groups and individuals submitted evidence to this inquiry. There is no shortage of ideas for action. There is, however, a distinct lack of action so far from the government, especially in the inadequate level of funding for AI.”
“There is a formidable amount of work to do to ensure that the declaration has the desired impact,” said Prof Paul Salmon, co-director of the University of the Sunshine Coast’s Centre for Human Factors and Sociotechnical Systems.
“There is a lack of detail regarding key aspects, such as regulation and the metrics and tools to be used. How we can ensure that different nations apply consistent and valid methods when identifying and addressing risks is not yet clear.”
Dr Rebecca Johnson, a generative AI ethics researcher at the University of Sydney, said Australia had to decide whether to follow the US model of a “comprehensive and nuanced” approach to AI or the approach of the Bletchley declaration, which was “steeped in existential risk rhetoric”.
Husic said it was important to get the balance right: “At its best, AI can help improve the quality of life and improve the way our economy functions. At its worst, it can distort and divide. I don’t think really that the worry is that the robots will take over, but that potentially AI-generated misinformation will.
“And so, we recognise the risks that are there. If we get this right, we build trust in the use of the technology. That’ll be really good but stamping out some of these things will be really important as well.”
Guardian Australia sent questions to Husic about the next steps in the AI regulatory process but did not receive a response.