40% of Your Kid's YouTube Shorts Are AI-Generated. Nobody Told the Parents.
A NYT investigation found YouTube's algorithm floods children with bizarre, nonsensical AI videos. Experts say it could rewire how young brains learn.
Four seconds into "Old MacDonald Had a Farm," a horse with two arms and four legs hatches from an egg.
That's not a fever dream. It's a YouTube Short recommended to toddlers by the platform's algorithm — and it has millions of views.
A New York Times investigation published this week reviewed more than 1,000 videos recommended to young children on YouTube. What they found should make every parent put down their phone and pick up their kid's tablet: after watching a single CoComelon or Bluey video, more than 40 percent of recommended Shorts appeared to contain AI-generated visuals.
These aren't edge cases buried in obscure corners of the platform. They're being actively pushed by the same algorithm that serves up Ms. Rachel and "Wheels on the Bus."
What the Videos Actually Look Like
Imagine a pink elephant doing gymnastics on a tightrope next to the letter "A." Animals forming from paint squirted into water, then growing mermaid tails. Characters with warped faces and extra limbs. Garbled text. No plot. No repetition. No logic.
Each clip runs about 30 seconds. They're produced with cheap AI tools and uploaded multiple times a day. The channels market themselves as "educational" — claiming to teach toddlers and preschoolers about animals and the alphabet.
They're doing the opposite.
Why This Matters for Growing Brains
Here's where it gets serious.
Dr. Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan, told the Times these videos are pure "attention capture." No meaning. No structure. Just visual noise designed to keep tiny eyes locked on screen.
"The worst case is that it's so fantastical and full of attention capture that it is going to be cognitively overloading to the child," she said.
Think about what good children's media actually does. Mister Rogers reflected the world kids already knew — helping them make sense of emotions, relationships, and cause and effect. Sesame Street taught letters through repetition, plot, and characters who behaved consistently.
AI slop does none of that. A giraffe dives into a swimming pool. The pool looks real. The giraffe looks real. But the scenario is impossible — and a three-year-old's brain has to burn cognitive energy trying to reconcile what they're seeing with what they know about the world.
"It may seem innocuous," said Dr. Rachel Barr, a developmental psychologist at Georgetown University. "But that is not going to help them learn about swimming or giraffes or 'G'."
The Algorithm Doesn't Care About Your Child
Here's the thing parents need to understand: YouTube's recommendation engine optimizes for watch time. That's it. It doesn't know or care whether a video teaches your kid anything. It knows that bright colors, fast cuts, and absurd visuals keep toddlers watching.
AI-generated content is perfect algorithm food. It's cheap to produce — one creator can upload dozens of videos a day. It's visually stimulating. And it generates views, which generates ad revenue, which generates more AI videos.
The Times found the same AI channels popping up across multiple test sessions. The algorithm doesn't just surface this content occasionally. It locks onto it.
Even YouTube Kids — supposedly a controlled environment — is full of these videos. The platform doesn't require AI-generated animated content aimed at children to carry any label. The entire moderation burden falls on parents.
The Displacement Problem
Long-term health studies on AI content and children don't exist yet. The stuff is too new. But experts are already worried about displacement — the idea that every minute a kid spends watching meaningless AI slop is a minute they're not reading, playing, talking to humans, or watching something that actually helps them develop.
Dr. Mitch Prinstein, a psychology professor at the University of North Carolina, watched the videos and put it simply: "These do strike me as something that are made to really get in your head. It may even be harmful, but we need more data."
McCall Booth, a developmental psychologist at Georgetown, raised a longer-term concern. Kids who grow up watching hyper-realistic but impossible scenarios might have "a harder time in the future identifying fake content because their mental schema had already adapted to include improbable, but aesthetically realistic character actions."
In other words: feed a toddler enough AI-generated nonsense, and you might be training them to accept AI-generated nonsense as normal.
This Isn't New. But the Scale Is.
Low-quality kids' content existed on YouTube long before AI tools made it easy. The "Elsagate" scandal of 2017 — when disturbing videos featuring kids' characters flooded the platform — forced YouTube into mass takedowns and stricter policies for children's content.
But AI changed the economics. What used to require animators, scripts, and production time now takes minutes. The volume is exploding. And the American Academy of Pediatrics has already updated its guidance, telling parents to avoid AI-generated content and short-form video for young children entirely.
That guidance assumes parents can tell the difference. Many can't. The Times used AI detection tools to verify videos that looked convincing enough to pass casual inspection.
What's Actually Being Done
Not much.
YouTube has a "synthetic content" label, but it's applied inconsistently and doesn't cover animated AI content targeting children. The platform told the Times it's "committed to providing a safe and responsible experience" — the kind of statement that means almost nothing.
The EU's Digital Services Act requires platforms to protect minors and be transparent about algorithmic recommendations. Australia banned under-16s from social media entirely last year. The UK's Online Safety Act puts new duties on platforms to protect children.
In the US? The social media addiction trial currently underway in Los Angeles — where Meta faces 1,600+ lawsuits over youth mental health — might eventually force change. But "eventually" doesn't help the three-year-old watching a horse hatch from an egg right now.
The Bigger Picture
This story sits at the intersection of everything the attention economy has been building toward. Algorithms that optimize for engagement over meaning. AI tools that make content creation nearly free. Platforms that outsource responsibility to users. And the most vulnerable audience imaginable — children whose brains are still learning how to tell real from fake.
The question isn't whether AI-generated kids' content is bad. It's whether anyone with the power to fix it will act before a generation of children grows up thinking horses hatch from eggs.
The American Academy of Pediatrics says avoid it. Developmental psychologists say it's cognitively overloading. YouTube's algorithm says watch more.
Guess which one wins.