Australia needs an AI commissioner to scrutinise the technology's use in businesses and government agencies and to avoid serious harms, including Robodebt-like scandals, an inquiry has heard.
The Australian Human Rights Commissioner made the recommendation at an inquiry into artificial intelligence technology in the public sector on Wednesday, warning that without oversight, AI could make biased, incorrect and opaque decisions that unfairly affect citizens.
Unions and data scientists also called for strict AI protections in fields such as recruitment and customer service, although one participant warned that older workers would need to retrain to "stay afloat" in a technology-driven workforce.
The public sector AI inquiry, called in September, is investigating the deployment of the advanced technology in government agencies and departments, including its use to deliver public services.
AI should be strictly regulated to ensure no one was disadvantaged by its use, Human Rights Commissioner Lorraine Finlay told the committee, and the government should appoint a dedicated AI commissioner to oversee the rules.
A commissioner would not only monitor and enforce AI regulations, she said, but could help set restrictions and educate the public, while ensuring organisations did not miss out on opportunities.
"An AI commissioner would potentially ... balance out the need to ensure human rights principles are front and centre, but also recognise in the innovations and productivity gains and efficiency that can be gained through AI," she said.
Mandatory AI restrictions would be most needed in high-risk fields, Ms Finlay said, including law enforcement and government services, and AI tools must be able to clearly explain decisions that affect individuals.
"If AI can't provide reasons for a decision, then we say that is something where it shouldn't be used," she said.
Australian government workers were also deeply concerned about the use of AI tools in job placement and recruitment, Community and Public Sector Union deputy secretary Rebecca Fawcett told MPs.
In a survey of almost 2000 government employees, 85 per cent said they were concerned about the use of AI in industrial relations.
"These tools do a poor job of assessing capabilities and identifying candidates who can perform the role," she said.
"We are aware, for example, in Services Australia, workers with a proven track record being rated unsuitable and ruled out for promotional permanency because their video recording or written application failed to use language that the algorithm was searching for."
Strict guidelines for AI use were needed urgently, Ms Fawcett said, particularly after the Robodebt scandal "highlighted the difficulty of rebuilding public trust".
But rather than concentrating on AI restrictions, Workday corporate affairs director Eunice Lim said the government should focus on helping employees retrain for the AI era.
"For older or more mature workers, they understand that AI is going to change the way and the nature of work that they know of today and many are understandably concerned and worried about what this means for them," she said.
"Instead of resisting it, it's time for them to stay ahead of the curve and think about the skills that they will require to stay afloat."
The federal government is considering mandatory AI guardrails following a public consultation that closed in October.