The Guardian - UK
Dan Milmo, Global technology editor

Paedophiles using open source AI to create child sexual abuse content, says watchdog

‘They’re sharing guides and tips:’ Dan Sexton, chief technology officer of the Internet Watch Foundation. Photograph: Graeme Robertson/The Guardian

Freely available artificial intelligence software is being used by paedophiles to create child sexual abuse material (CSAM), according to a safety watchdog, with offenders discussing how to manipulate photos of celebrity children or known victims to create new content.

The Internet Watch Foundation said online forums used by sex offenders were discussing using open source AI models to create fresh illegal material. The warning came as the chair of the government’s AI taskforce, Ian Hogarth, raised concerns about CSAM on Tuesday as he told peers that open source models were being used to create “some of the most heinous things out there”.

Open source AI technology can be downloaded and adjusted by users, as opposed to closed tools such as OpenAI’s Dall-E or Google’s Imagen, whose underlying image-generation models cannot be accessed or changed by members of the public.

Dan Sexton, chief technology officer at the Internet Watch Foundation, told the Guardian that paedophile discussion forums on the dark web were discussing matters such as which open source models to use and how to achieve the most realistic images.

“There’s a technical community within the offender space, particularly dark web forums, where they are discussing this technology. They are sharing imagery, they’re sharing [AI] models. They’re sharing guides and tips.”

He added: “The content that we’ve seen, we believe is actually being generated using open source software, which has been downloaded and run locally on people’s computers and then modified. And that is a much harder problem to fix. It’s been taught what child sexual abuse material is, and it’s been taught how to create it.”

The discussions include using images of celebrity children, publicly available images of children or images of known child abuse victims to create new abuse content. “All of these ideas are concerns and we have seen discussions about them,” said Sexton.

According to forum discussions seen by the IWF, offenders start with a basic image-generation model that has been trained on billions of tagged images, which gives them the fundamentals of image generation. This is then fine-tuned with CSAM images to produce a smaller model using low-rank adaptation, a technique that lowers the amount of computing power needed to produce the images.

Asked if the IWF, which searches for CSAM and coordinates its removal as well as operating a hotline for tipoffs, could be overwhelmed by AI-made material, Sexton said: “Child sexual abuse online is already, as we believe, a public health epidemic. So this is not going to make the problem any better. It’s only going to potentially make it worse.”

The IWF searches for child sexual abuse material and coordinates its removal as well as operating a hotline for tip-offs. Photograph: Graeme Robertson/The Guardian

Law enforcement and child safety experts fear that photorealistic images of CSAM, which are illegal in the UK, will make it more difficult to identify and help real-life victims. They are also concerned that the sheer potential volume of such imagery could make it more widely consumed.

In June the BBC reported that Stable Diffusion, an open source AI image generator, was being used to create abuse images from text prompts typed in by humans. Sexton said Stable Diffusion had been discussed in online offender communities.

Stability AI, the UK company behind Stable Diffusion, told the BBC it “prohibits any misuse for illegal or immoral purposes across our platforms, and our policies are clear that this includes CSAM”.

The IWF warned in June that AI-generated material was emerging online. It investigated 29 reports of webpages containing suspected AI-made material over a five-week period this summer and found that seven of them contained AI-generated CSAM.

Andrew Rogoyski, of the Institute for People-Centred AI at the University of Surrey, said: “Open source AI is important to democratising AI, ensuring that this powerful technology isn’t controlled by a handful of very large corporates. The downside of making AI software freely available is that there are people who will misuse the technology.”

However, he added that open source software could in turn provide a solution because it could be adapted.

A UK government spokesperson said AI-generated CSAM would be covered by the forthcoming online safety bill, and social media platforms would be required to prevent it from appearing on their services.

Speaking at a House of Lords communications and digital committee meeting on Tuesday, Hogarth said dealing with the issue of open source as opposed to closed source systems was a big challenge.

He said closed source systems had issues with a lack of transparency about their contents and their potential for damaging competition, while there were concerns about “irreversible proliferation” of open source models. Hogarth referred to concerns over CSAM generation and added that deployment of open source models could not be reversed.

“Once it’s out you can’t put it back in the jar. And it makes it harder to do precautionary deployment of certain things.”

• This article was amended on 13 September 2023 to include images commissioned for the article.
