
For decades, the world followed a safe pattern: technology was invented, people used it and then, eventually, the government regulated it. With AI, at least in the U.S., that safety net has collapsed.
Today, children are interacting with AI at every turn — through chatbots, "educational" apps, toys and YouTube channels. But while the colors are bright and the songs are catchy, there is a dark side to this automated content that most parents haven't noticed yet.
The rise of 'AI slop'

Investigators are sounding the alarm on a new phenomenon called "AI slop": mass-produced videos generated in seconds by AI and pushed by algorithms designed to do one thing: keep kids watching. As a mom of young kids, I am all too familiar with this problem. It isn't just about prying screens out of our kids' hands; it's the AI-generated content flooding YouTube.
A recent report from The New York Times found AI-generated content showing characters walking into traffic or ignoring basic safety rules, along with "educational" facts that are completely made up (AI hallucinations). The result is surreal, disturbing imagery that blurs the line between reality and fiction.
Why this is different from "bad TV"

I grew up in the 1980s, and my parents owned video stores (long before Blockbuster existed). I definitely saw my fair share of inappropriate content, mostly by virtue of growing up around movies like "Sixteen Candles," "Police Academy" and "Splash."
In the past, even the cheapest cartoons required a "human bottleneck." A writer had to script it; an editor had to check it. There was a layer of human judgment.
But AI has removed the human checkpoint. Now, a single user can flood a platform with thousands of videos in minutes. There is no teacher reviewing the lesson, no editor checking the message and no moral compass guiding the output. Just infinite, automated "engagement."
The 'credibility' trap

Children are developmentally vulnerable to AI because they can't yet distinguish between a real person and a confident-sounding AI. Plus, an authority bias exists: if a character looks like a "teacher" or a "police officer," a child assumes that what they say is true. That's especially the case for kids coming from scripted shows like "Bluey" and "Sesame Street."
But on YouTube, shows "of value" sit in the same algorithmic feed as those churned out by AI. For a kid just scrolling, that line is blurred. AI doesn't optimize for accuracy; it optimizes for the click. If a dangerous video performs well, the system will show it to millions more.
The takeaway
As parents, we are already juggling a lot. I'm guilty of handing my kid a tablet and saying, "Just give me 15 minutes." I could parent without screen time, but I could also churn my own butter; neither is happening. Still, this is a wake-up call to at least know what our kids are watching on their screens.
Since credibility can't be automated and lawmakers are still debating whether to regulate AI at the state or federal level, it's our job to stay aware so technology doesn't raise the next generation.
Because the tech is already here. The safeguards are not.