The Guardian - UK
Technology
Dan Milmo, global technology editor

AI-generated child sexual abuse imagery reaching ‘tipping point’, says watchdog

The Internet Watch Foundation, a charity based in Cambridge, UK, but with a global remit, said most of the content was accessible on the open web. Photograph: Graeme Robertson/The Guardian

Child sexual abuse imagery generated by artificial intelligence tools is becoming more prevalent on the open web and reaching a “tipping point”, according to a safety watchdog.

The Internet Watch Foundation said the amount of AI-made illegal content it had seen online over the past six months had already exceeded the total for the previous year.

The organisation, which runs a UK hotline but also has a global remit, said almost all the content was found on publicly available areas of the internet and not on the dark web, which must be accessed by specialised browsers.

The IWF’s interim chief executive, Derek Ray-Hill, said the level of sophistication in the images indicated that the AI tools used had been trained on images and videos of real victims. “Recent months show that this problem is not going away and is in fact getting worse,” he said.

According to one IWF analyst, the situation with AI-generated content was reaching a “tipping point” where safety watchdogs and authorities did not know if an image involved a real child needing help.

The IWF took action against 74 reports of AI-generated child sexual abuse material (CSAM) – realistic enough to break UK law – in the six months to September this year, compared with 70 over the 12 months to March. A single report can refer to a webpage containing multiple images.

As well as AI images featuring real-life victims of abuse, the types of material seen by the IWF included “deepfake” videos in which adult pornography had been manipulated to resemble CSAM. In previous reports the IWF has said AI was being used to create images of celebrities who have been “de-aged” and then depicted as children in sexual abuse scenarios. Other examples of CSAM seen by the IWF include material created by using AI tools to “nudify” pictures of clothed children found online.

More than half of the AI-generated content flagged by the IWF over the past six months was hosted on servers in Russia and the US, with Japan and the Netherlands also hosting significant amounts. Addresses of the webpages containing the imagery are added to an IWF list of URLs shared with the tech industry so the pages can be blocked and rendered inaccessible.

The IWF said eight out of 10 reports of illegal AI-made images came from members of the public who had found them on public sites such as forums or AI galleries.

Meanwhile, Instagram has announced new measures to counteract sextortion, in which users are tricked into sending intimate images to criminals, who typically pose as young women, and are then subjected to blackmail threats.

The platform will roll out a feature that blurs any nude images users are sent in direct messages, and urges them to be cautious about sending any direct message (DM) that contains a nude image. Once a blurred image is received the user can choose whether or not to view it, and they will also receive a message reminding them that they have the option to block the sender and report the chat to Instagram.

The feature will be turned on by default for teenagers’ accounts globally from this week and can be used on encrypted messages, although images flagged by the “on-device detection” feature will not automatically be reported to the platform itself or to the authorities.

It will be an opt-in feature for adults. Instagram will also hide follower and following lists from potential sextortion scammers who are known to threaten to send intimate images to those accounts.
