AAP
Jennifer Dudley-Nicholson

Deepfake images harming Aussie kids and businesses

Experts are warning about the dangers for children and businesses of AI-generated deepfake images. (James Ross/AAP PHOTOS)

Deepfake images and videos are harming Australian children and businesses, and laws are urgently needed to prevent more people from being extorted or scammed through the technology, experts say.

The warnings came at the AI Leadership Summit in Melbourne on Monday, with speakers calling for strict technical and legal regulations to govern the use of generative AI.

Experts revealed criminal gangs were using the technology in sexual extortion attempts, while young girls were failing to report AI image-based abuse because they felt regulations could not protect them. 

The summit, hosted by CEDA and the National AI Centre, heard from business, safety, privacy and university experts on artificial intelligence, just weeks after consultation closed on proposed mandatory AI rules.

Deepfake images were singled out as a major concern with generative AI technology, and eSafety Commissioner Julie Inman Grant said the technology was already being exploited by criminal organisations.

"Criminal gangs out of Nigeria and West Africa (are) using face-swapping technology in video-conferencing calls to execute sophisticated sexual extortion schemes targeting young Australian men between the ages of 18 and 24," she said.

"We've seen a four-fold increase in reports since 2018."

Crime gangs are using deepfake images to exploit people, Julie Inman Grant says. (Mick Tsikas/AAP PHOTOS)

Deepfake images should not just be a concern for individuals, she said, as "vishing" attacks that combined video conference calls with phishing attempts were increasingly targeting business executives.

Deepfake images were being used to bully and harass school children, ThatsMyFace chief executive Nadia Lee said, and victims were finding it hard to trust the remedies available to them.

Ms Lee said she recently spoke with a girl in year seven who had been targeted by a year 12 student using AI-generated nude images. 

"He had generated deepfake pornographic images of her, put it on Snapchat, it was live for 24 hours, everybody saw it and it was very traumatic for her," she said. 

"She was very hesitant (about reporting it) and ended up not going forward because she thought, 'as far as I know, I'm the only victim of what he did and if he gets in trouble he will know that it's from me'."

Children and parents needed greater education about generative AI rules and potential resolutions, Ms Lee said, as well as higher levels of trust in the reporting system.

Laws governing AI technology should focus on its potential misuse first, IBM chief privacy and trust officer Christina Montgomery said, to limit harm to people, organisations and elections. 

"(Deepfakes) are one of the most pressing challenges posed by generative AI, particularly given the potential for bad actors to use it to undermine democracy," she said.

"Making the distribution of materially deceptive deepfake content related to elections illegal is one step that governments can take right now to help instil trust."

The federal government recently completed its consultation into mandatory guardrails for AI and received more than 300 submissions. 

An interim report from the Senate's Adopting Artificial Intelligence inquiry recommended laws to restrict deepfake political ads be introduced before the 2029 federal election.
