More than 200 child development experts and advocacy groups have called on Google to prohibit AI-generated content from being recommended to children, warning that the mass-produced videos could cause long-term developmental harm.
In an open letter sent to Alphabet CEO Sundar Pichai and YouTube CEO Neal Mohan Wednesday, the coalition urged the tech giant to stop hosting synthetic media on its YouTube Kids platform and to halt algorithmic recommendations of such content to all users under 18.
The signatories, which include the American Federation of Teachers and social psychologist Jonathan Haidt, author of The Anxious Generation, described the proliferation of low-quality synthetic videos as an “uncontrolled experiment” on the world’s youngest viewers.
“The potential consequences of forcing AI content on children are varied, and there is much we don’t know about the consequences of AI content for children,” the group wrote. “Regardless, it has proliferated rapidly without any research or regulation.”
The group used the term “AI slop” — a phrase popularized in recent years to describe low-quality, high-volume synthetic media — to highlight a growing trend of bizarre, often plotless videos created using generative artificial intelligence. They argue these clips are designed to “hijack” children’s attention spans through “zombifying animations” and sensory-heavy visuals that displace real-world social interaction.
“YouTube is participating in this uncontrolled experiment by pushing AI-generated content without research demonstrating its benefits and without acknowledging the child development principles that tell us it’s likely mostly harmful,” the letter stated.
The advocates highlighted financial incentives as a primary driver for the content, noting that some creators earn millions of dollars annually from “plotless, mesmerizing AI content.” Research cited in the letter suggested that after viewing popular preschool shows, up to 40 percent of subsequent recommendations for children can contain AI-generated material.
Particular concern was raised regarding “AI slop” that bypasses filters or appears in search results for educational topics. The group pointed to a 2025 investigation that found AI-generated animal torture videos appearing under kid-friendly tags such as “#familyfun.”
While YouTube currently requires creators to label “altered and synthetic content,” the coalition dismissed these measures as insufficient.
“The phrase ‘altered and synthetic content’ is also unlikely to be understood by the preliterate children who are targets for much of this AI slop and aren’t even able to read the disclosures,” the group wrote.
In response, YouTube spokesperson Boot Bullwinkle said in a statement that the company maintains “high standards” for its Kids app, limiting AI content to a “small set of high-quality channels.”
“Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content,” he said. “We’re always evolving our approach to stay current as the ecosystem evolves.”
He added that parents have the option to block specific channels.

The pressure comes at a difficult time for Google’s regulatory standing. In March, a landmark jury trial found Google and Meta liable for harming a young user through addictive product design — a verdict both companies intend to appeal.
The letter concludes by demanding that Google “halt all investment in the creation of AI-generated videos for children.” It specifically references the company’s recent backing of Animaj, an AI animation studio that produces videos for young children, including babies and toddlers.
“If Google wants to continue marketing YouTube and YouTube Kids to children, it is the company’s responsibility to ensure that its platforms are safe and developmentally appropriate,” the letter stated.