International Business Times UK
Technology
Callum Conway-Shaw

Australia May Ask Tech Companies To Label Content Generated By AI Under New Laws

Artificial intelligence will affect up to 40% of jobs worldwide, the International Monetary Fund (IMF) warned this week. (Credit: Lionel BONAVENTURE/AFP)

Australia may push through a new law forcing tech companies to watermark or label content generated by artificial intelligence, as its federal government tries to tackle "high-risk" AI products evolving faster than legislation.

A recent consultation process on safe and responsible AI use in Australia found that adopting AI and automation could increase the country's GDP by up to $600 billion a year.

However, the government's response to the research also notes rising public concern about the technology, which may lead to stricter regulation being introduced.

The industry and science minister, Ed Husic, said that while the government wanted to see "low-risk" uses of AI continue to flourish, some applications – such as self-driving cars or generative AI software – needed new and stricter regulation.

"Australians understand the value of artificial intelligence but they want to see the risks identified and tackled," Husic said. "We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI."

With this in mind, the consultation paper also refers to surveys showing that only a third of Australians believe there are adequate "guardrails" for the design and development of AI.

It reads: "While AI is forecast to grow our economy, there is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly."

In response, the federal government has pledged to immediately set up an expert advisory group on the development of AI policy, including further guardrails; develop a voluntary "AI safety standard" as a single source for businesses wanting to integrate AI into their systems; and begin consulting with industry on new transparency measures.

Australia would become the latest nation to adopt specific AI laws, after the world's first AI Safety Summit, hosted by UK Prime Minister Rishi Sunak in November last year, debated the idea of stricter regulation.

In the build-up to the summit, Sunak announced the establishment of a 'world first' UK AI safety institute.

The summit concluded with the signing of the Bletchley Declaration – an agreement by countries including the UK, United States and China on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".

The European Union finalised the world's first comprehensive AI law in December; it aims to regulate systems based on the level of risk they pose.

Negotiations on the final legal text began in June, but a fierce debate over how to regulate general-purpose AI such as ChatGPT and Google's Bard chatbot threatened to derail the talks at the last minute.

Under the Canberra government's plan, announced today, safeguards would be applied to technologies that predict the likelihood of someone reoffending, or that analyse job applications to find a well-matched candidate.

Australian officials have said that new laws could also mandate that organisations using high-risk AI must ensure a person is responsible for the safe use of the technology.

Husic told the Australian Broadcasting Corp. that he wants AI-generated content to be labelled so it cannot be mistaken for genuine content.

"We need to have confidence that what we are seeing we know exactly if it is organic or real content, or if it has been created by an AI system."

"And, so, the industry is just as keen to work with government on how to create that type of labelling," he said. "More than anything else, I am not worried about the robots taking over, I'm worried about disinformation doing that. We need to ensure that when people are creating content it is clear that AI has had a role or a hand to play in that."

A new report published this week by the World Economic Forum claimed AI-powered misinformation is the world's biggest short-term threat.

With 2024 dubbed by many as "the year of elections", its Global Risks Report expressed fears that a wave of artificial intelligence-driven misinformation and disinformation could influence democratic processes and polarise society.

The annually released report concluded that this threat is the most immediate risk to the global economy.

The issue of copyright is also becoming a major battleground for the much-hyped generative AI sector, with publishers, musicians and artists increasingly lawyering up to get paid for technology that is being built with their content.

The New York Times sued ChatGPT-maker OpenAI and Microsoft in a US court last month, alleging that the companies' powerful AI models used millions of articles for training without permission.

Through their AI chatbots, the companies "seek to free-ride on The Times' massive investment in its journalism by using it to build substitutive products without permission or payment", the lawsuit said.

Meanwhile, a US judge ruled last year that an artwork created by an AI without human involvement could not be copyrighted under US law, as human authorship was a "bedrock requirement of copyright".
