Artificial intelligence researchers have taken significant steps to address suspected child sexual abuse imagery in AI image-generator tools. More than 2,000 web links to such content were removed from the LAION research database, a key resource for training popular AI image-making tools such as Stable Diffusion and Midjourney.
A report by the Stanford Internet Observatory last year revealed that the database contained links to sexually explicit images of children, making it easier for some AI tools to produce photorealistic deepfakes depicting children. In response, LAION, the Large-scale Artificial Intelligence Open Network, promptly took down the dataset.
After eight months of collaboration with Stanford University and anti-abuse organizations in Canada and the United Kingdom, LAION announced in a blog post that it had fixed the issue and released a cleaned-up database for future AI research.
While acknowledging the improvements made by LAION, Stanford researcher David Thiel emphasized the need to withdraw the 'tainted models' capable of producing child abuse imagery. Runway ML removed one of the identified tools, an older version of Stable Diffusion, from the AI model repository Hugging Face, citing a planned deprecation of outdated research models and code.
The updated LAION database release coincides with increased scrutiny by governments worldwide of the misuse of tech tools to create or distribute illegal images of children. San Francisco's city attorney recently filed a lawsuit to shut down websites enabling the creation of AI-generated nudes of women and girls. Additionally, French authorities pressed charges against Telegram's founder and CEO, Pavel Durov, in connection with the alleged distribution of child sexual abuse images on the messaging app.