YouTube continues to grapple with the proliferation of AI-generated content, often described as “AI slop,” and the challenges of ensuring a safe viewing experience, particularly for children. Now, Google’s decision to invest $1 million in Animaj, an AI-powered children’s entertainment company, is facing backlash from child safety advocates who argue the move could exacerbate the problem. The investment, announced on March 4, is intended to bolster the creation of AI-generated content for young audiences, a strategy critics say prioritizes profit over child development.
The core concern centers on the potential for AI-generated videos to be overly stimulating, lacking in educational value, and ultimately detrimental to the healthy development of young children. Although Google acknowledges the presence of low-quality content on its platform and has taken steps to demonetize some accounts, advocates argue that a more proactive approach is needed to protect vulnerable viewers. This latest investment, they contend, signals a troubling willingness to embrace the very technology contributing to the problem of AI slop on YouTube.
Animaj and Google’s AI Futures Fund
Animaj, described as a “next-generation media company” building the future of kids’ entertainment, will receive exclusive access to Google’s generative AI tools, including Veo and Imagen, as part of the deal. The company aims to scale existing intellectual property (IP) like Pocoyo and Ubisoft’s Rabbids, delivering content “wherever kids are, whenever they want it.” According to Bloomberg, Animaj’s affiliated YouTube channels accumulated over 22 billion views in 2025. The company’s co-founder, Sixte de Vauplane, envisions Animaj as a proof of concept for high-quality, feature-length films powered by AI.
Concerns Over “Mesmerizing” Content and Child Development
Rachel Franz, director of Fairplay for Kids’ Young Children Thrive Offline program, sharply criticized Google’s investment, stating, “It’s not unlike Google to try to deflect attention from the real issue: AI slop is rampant on YouTube and YouTube Kids, which puts developing children at risk of harm.” Franz argues that the focus should be on removing existing harmful content rather than investing in more of it. She points to research indicating that any screen time can have adverse effects on children under the age of two, and that content designed to simply “mesmerize” children displaces crucial time needed for play, socialization, and sensory exploration.
The American Academy of Pediatrics also cautions against AI-generated content, recommending that parents prioritize longer-form videos and evidence-based educational programming. Experts warn that the design of YouTube itself, with features like endless scrolling and algorithm-driven recommendations, is developmentally inappropriate for young children. The concern isn’t limited to AI-generated content; even popular, human-created shows like CoComelon have been criticized for their potentially overstimulating nature.
The Risk of Normalizing AI-Generated Content
Franz expressed particular concern that Google’s investment in Animaj, and channels like Hey Kids (which has over 4 million subscribers), effectively equates to “investing in harming babies.” She worries that normalizing AI-generated content will further supercharge an industry that often prioritizes engagement over educational value. A recent analysis by The New York Times found thousands of examples of AI slop targeting young viewers, some of which violated YouTube’s child safety policies. The Times also noted that YouTube does not currently require AI labeling on animated videos.
Jon Silber, director of Google’s AI Futures Fund, described Animaj as presenting a “blueprint for the future,” stating that “getting this right for the next generation is a huge priority” for Google. Franz, however, remains skeptical, arguing that until YouTube addresses the fundamental flaws in its platform, no amount of “good content” will be enough to mitigate the risks to young viewers. “If YouTube wants to try to make good content, fine. But they need to fix their platform. Until that happens, no child is truly going to benefit,” she said.
The debate over AI-generated content for children highlights the complex challenges of balancing innovation with the need to protect vulnerable populations. As AI technology continues to evolve, ongoing scrutiny and proactive measures will be crucial to ensure a safe and enriching online experience for all children.
If you are concerned about the impact of screen time on children, resources are available from organizations like Fairplay for Kids (https://www.fairplayforkids.org/) and the American Academy of Pediatrics (https://www.aap.org/).
What steps will Google take to address the concerns raised by child safety advocates? Share your thoughts in the comments below.