The Guardian - AU
Technology
Kelly Burke

Amazon, Google and Meta are ‘pillaging culture, data and creativity’ to train AI, Australian inquiry finds

Google company headquarters in California. The report found that creative workers were at the most imminent risk of AI severely impacting their livelihoods. Photograph: Marcio José Sánchez/AP

Tech companies Amazon, Google and Meta have been criticised by a Senate select committee inquiry for being especially vague over how they used Australian data to train their powerful artificial intelligence products.

Labor senator Tony Sheldon, the inquiry’s chair, was frustrated by the multinationals’ refusal to answer direct questions about their use of Australians’ private and personal information.

“Watching Amazon, Meta, and Google dodge questions during the hearings was like sitting through a cheap magic trick – plenty of hand-waving, a puff of smoke, and nothing to show for it in the end,” Sheldon said in a statement, after releasing the final report of the inquiry on Tuesday.

He called the tech companies “pirates” that were “pillaging our culture, data, and creativity for their gain while leaving Australians empty-handed.”

The report recommended that some general-purpose AI models – such as OpenAI’s GPT, Meta’s Llama and Google’s Gemini – should automatically default to a “high risk” category and be subject to mandated transparency and accountability requirements.

Several key themes emerged during the inquiry and in its report.

Standalone AI laws needed

Sheldon said Australia needed “new standalone AI laws” to “rein in big tech” and that existing laws should be amended as necessary.

“They want to set their own rules, but Australians need laws that protect rights, not Silicon Valley’s bottom line,” he said.

He said Amazon had refused during the inquiry to disclose how it used data recorded from Alexa devices, Kindle or Audible to train its AI.

Google too, he said, had refused to answer questions about what user data from its services and products it used to train its AI products.

Meta admitted it had been scraping data from Australian Facebook and Instagram users since 2007 in preparation for future AI models. But the company was unable to explain how users could have consented to their data being used for something that did not exist in 2007.

Sheldon said Meta dodged questions about how it used data from its WhatsApp and Messenger products.

AI ‘high risk’ for creative workers

The report found that creative workers were at the most imminent risk of AI severely affecting their livelihoods.

It recommended payment mechanisms be put in place to compensate creatives when AI-generated work was based on their original material.

Developers of AI models needed to be transparent about the use of copyrighted works in their datasets, the report said. Any declared work should be licensed and paid for.

Among the report’s 13 recommendations is the call for the introduction of standalone AI legislation to cover AI models deemed “high risk”.

AI that affects people’s rights at work should be designated high-risk, the report said, meaning workers would be entitled to consultation, cooperation and representation before such systems were adopted.

The music rights management organisation Apra Amcos said the report recognised the detrimental impact of AI on workers, particularly in the creative sector. It said the report’s recommendations proposed “clear steps” to mitigate the risks.

The Media Entertainment and Arts Alliance said the report’s call for the introduction of legislation to establish an AI Act was “clear and unambiguous”.

Don’t suffocate AI with red tape

The two Coalition members on the committee, senators Linda Reynolds and James McGrath, said AI posed a greater threat to Australia’s cybersecurity, national security and democratic institutions than the creative economy.

They said mechanisms needed to be put in place “without infringing on the potential opportunities that AI presents in relation to job creation and productivity growth”.

They did not accept the report’s conclusion that all uses of AI affecting “people at work” should be automatically categorised as “high-risk”.

Additional comments by the Greens argued the final report did not go far enough.

“[The report] does not recommend an overarching strategy that would bring Australian regulation of AI into line with the UK, Europe, California or other jurisdictions,” the party said.

The Guardian approached Amazon, Google and Meta for comment.
