A new law will aim to crack down on “vile” sexual abuse deepfakes after reports of AI-generated child sexual abuse imagery more than doubled over the last year.
The fresh legislation, which the government will table on Wednesday as an amendment to the Crime and Policing Bill, will require developers of artificial intelligence models to put safeguards in place so that their technology cannot be used to create child sexual abuse material (CSAM).
It comes as data from child abuse charity the Internet Watch Foundation (IWF) shows reports of AI-generated CSAM have more than doubled over the last year - from 199 in 2024 to 426 in 2025.
The charity added there has been a “disturbing” rise in generated images of the very youngest children, with depictions of 0–2-year-olds surging from five to 92 over the last year.

In its ‘Trends of AI-Generated CSAM’ research, the IWF found that the AI-generated material being reported is also becoming more extreme. Category A images - the most serious kind, involving penetrative sexual activity, sexual activity with an animal, or sadism - now make up more than half the material, compared with 41 per cent last year.
It added that girls were “overwhelmingly” targeted, appearing in 94 per cent of the illegal AI-generated images reported.
The charity welcomed the government’s announcement, which it said was a “vital step” towards ensuring AI products are safe before they are released.
Kerry Smith, IWF’s chief executive, said: “AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material.

“Safety needs to be baked into new technology by design. Today’s announcement could be a vital step to make sure AI products are safe before they are released.”
Proposed new rules will allow the technology and home secretaries to designate “authorised testers”, including AI developers and child protection organisations such as the IWF. These bodies will be “empowered” to scrutinise AI models in a bid to proactively ensure they cannot be abused by those looking to exploit children, the government said.
Currently, developers are unable to carry out such safety testing on AI models because the images involved would themselves be illegal to create. As a result, illegal images can only be removed after they have been created and shared online.
In a “landmark conviction” last year, 27-year-old Hugh Nelson was jailed for 18 years after using the 3D modelling software Daz 3D to turn ordinary photographs of real children into indecent images.
Taking commissions from predators online, Nelson created hundreds of illegal images using a plugin that allowed him to map real children’s faces onto computer-generated models.
As part of the new legislation, the government said it would also create a group of experts in AI and child safety to design the safeguards needed to protect sensitive data and prevent any risk of illegal content being leaked.
Technology secretary Liz Kendall said the government would “not allow” technological advancement to outpace children’s safety.
“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk,” she said. “By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”
Jess Phillips, minister for safeguarding and violence against women and girls, said the measures would stop legitimate AI tools being used to create “vile” material.