The Guardian - US
Technology
Nick Robins-Early

US man used AI to generate 13,000 child sexual abuse pictures, FBI alleges

The deputy attorney general of the justice department said the department will pursue creators of CSAM ‘no matter how that material was created’. Photograph: Charlie Neibergall/AP

The FBI has charged a US man with creating more than 10,000 sexually explicit and abusive images of children, which he allegedly generated using a popular artificial intelligence tool. Authorities also accused the man, 42-year-old Steven Anderegg, of sending pornographic AI-made images to a 15-year-old boy over Instagram.

Anderegg crafted about 13,000 “hyper-realistic images of nude and semi-clothed prepubescent children”, prosecutors stated in an indictment released on Monday, many of them depicting children touching their genitals or being sexually abused by adult men. Evidence from the Wisconsin man’s laptop allegedly showed he used the popular Stable Diffusion AI model, which turns text descriptions into images.

Anderegg’s charges came after the National Center for Missing & Exploited Children (NCMEC) received two reports last year that flagged his Instagram account, which prompted law enforcement officials to monitor his activity on the social network, obtain information from Instagram and eventually obtain a search warrant. Authorities seized his laptop and found thousands of generative AI images, according to the indictment against him, as well as a history of using “extremely specific and explicit prompts” to create abusive material.

Anderegg faces four counts of creating, distributing and possessing child sexual abuse material, as well as sending explicit material to a child under 16. If convicted, he faces a maximum sentence of about 70 years in prison. According to 404 Media, the case is one of the first times the FBI has charged someone with generating child sexual abuse material with AI. Last month, a man in Florida was arrested for allegedly taking a picture of his neighbor’s child and using AI to create sexually explicit imagery from the photo.

Child safety advocates and artificial intelligence researchers have long warned that the malicious use of generative AI could lead to a surge in child sexual abuse material. Reports of online child abuse to the NCMEC rose about 12% in 2023 from the previous year, in part due to a sharp increase in AI-made material, threatening to overwhelm the organization’s tip line for flagging potential child sexual abuse material (CSAM).

“The NCMEC is deeply concerned about this quickly growing trend, as bad actors can use artificial intelligence to create deepfaked sexually explicit images or videos based on any photograph of a real child or generate CSAM depicting computer-generated children engaged in graphic sexual acts,” the NCMEC’s report read.

The boom in generative AI has led to the widespread creation of nonconsensual deepfake pornography, which has targeted everyone from A-list celebrities to private citizens. AI-generated images and deepfakes of minors have also circulated in schools, in one case leading to the arrest of two middle school boys in Florida who created nude images of their classmates. Several states have passed laws against the nonconsensual generation of explicit images, while the Department of Justice has said that generating sexual AI images of children is illegal.

“The justice department will aggressively pursue those who produce and distribute child sexual abuse material – or CSAM – no matter how that material was created,” the deputy attorney general, Lisa Monaco, said in a statement after the arrest. “Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive and increasingly photorealistic images of children.”

Stable Diffusion, an open-source artificial intelligence model, has previously been used to generate sexually abusive images and has been modified by users to produce explicit material. A report last year from the Stanford Internet Observatory also found child sexual abuse material in its training data. Stability AI, which develops Stable Diffusion, has said it forbids the use of its model for creating illegal content.

Stability AI, the UK company behind the wide release of Stable Diffusion, said it believed the AI model used in this case was an earlier version of the model, originally created by the startup RunwayML. Stability AI claimed that since it took over the development of Stable Diffusion models in 2022, it has implemented more safeguards in the tool. The Guardian has contacted RunwayML for comment.

“Stability AI is committed to preventing the misuse of AI and prohibit the use of our image models and services for unlawful activity, including attempts to edit or create CSAM,” the company said in a statement.

  • In the US, call or text the Childhelp abuse hotline on 800-422-4453 or visit their website for more resources and to report child abuse or DM for help. For adult survivors of child abuse, help is available at ascasupport.org. In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child on 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support for adult survivors on 0808 801 0331. In Australia, children, young adults, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831, and adult survivors can contact Blue Knot Foundation on 1300 657 380. Other sources of help can be found at Child Helplines International
