Tom’s Guide
Amanda Caswell

Is your child watching ‘AI Slop’? The disturbing new YouTube trend parents need to see

For decades, the world followed a safe pattern: technology was invented, people used it and then, eventually, the government regulated it. With AI, at least in the U.S., that safety net has collapsed.

Today, children are interacting with AI at every turn — through chatbots, "educational" apps, toys and YouTube channels. But while the colors are bright and the songs are catchy, there is a dark side to this automated content that most parents haven't noticed yet.

The rise of 'AI slop'

(Image credit: Runway Act-1)

Investigators are sounding the alarm on a new phenomenon called AI Slop: mass-produced videos generated in seconds and pushed by algorithms designed to do one thing, keep kids watching. As a mom of young kids, I'm all too familiar with this problem. It isn't just about pulling screens out of our kids' hands; it's about the AI-generated content flooding YouTube.

A recent report from the New York Times found AI-generated content showing characters walking into traffic or ignoring basic safety rules, along with "educational" facts that are completely made up (AI hallucinations). The result is surreal, disturbing imagery that blurs the line between reality and fiction.

Why this is different from "bad TV"

(Image credit: Runway AI video/Future)

I grew up in the 1980s and my parents owned video stores (long before Blockbuster existed). I definitely saw my fair share of inappropriate content, mostly by virtue of being around movies like "Sixteen Candles," "Police Academy" and "Splash."
In the past, even the cheapest cartoons required a "human bottleneck." A writer had to script it; an editor had to check it. There was a layer of human judgment.

But AI has removed the human checkpoint. Now, a single user can flood a platform with thousands of videos in minutes. There is no teacher reviewing the lesson, no editor checking the message and no moral compass guiding the output. Just infinite, automated "engagement."

The 'credibility' trap

(Image credit: Runway Frames/AI image)

Children are developmentally vulnerable to AI because they can't yet distinguish between a real person and a confident-sounding AI. Plus, an authority bias exists: if a character looks like a "teacher" or a "police officer," a child assumes that what they say is true, especially if they're used to scripted shows like "Bluey" and "Sesame Street."

But on YouTube, shows "of value" are served by the same algorithm as those created by AI. For a kid just scrolling, that line is blurred. AI doesn't optimize for accuracy; it optimizes for the click. If a dangerous video performs well, the system will show it to millions more.

The takeaway

As parents, we are already juggling a lot. I'm guilty of handing my kid a tablet and saying, "Just give me 15 minutes." I could parent without screen time, but I could also churn my own butter. Neither of which is happening. But this is a wake-up call to at least know what our kids are watching on their screens.

Since credibility can't be automated and lawmakers are still debating whether to regulate AI at the state or federal level, it's our job to stay aware so that technology doesn't raise the next generation.

The tech is already here; the safeguards are not.
