
Keeping Kids Safe with Generative AI: Google’s Approach

by Sophie Lin - Technology Editor

Google is prioritizing the safety of young people as generative artificial intelligence tools become increasingly prevalent, outlining a multi-faceted approach to mitigate risks and foster responsible innovation. The company detailed its strategy at the “Growing Up in the Digital Age” Summit in Dublin on March 11, emphasizing a commitment to building AI experiences that are high-quality, privacy-protective, and age-appropriate. This push comes as generative AI unlocks new opportunities for learning, creativity, and connection, but also presents unique challenges for younger users.

The core of Google’s plan rests on three pillars: protecting youth online, respecting family dynamics around technology, and empowering young people to safely explore the digital world. Christy Abizaid, VP of Trust & Safety, Global Policy & Standards, articulated this vision in a keynote address, signaling a proactive stance toward safeguarding children and teens in the evolving landscape of AI. The focus is on embedding safety measures throughout the entire development lifecycle of these tools, rather than relying solely on reactive measures.

Building Proactive Protections into AI Systems

For over two decades, Google has leveraged AI within its products, and its safety approach has evolved alongside the technology. The company’s policies explicitly prohibit the use of generative AI for creating content related to child sexual abuse, violent extremism, self-harm, and non-consensual intimate imagery. Restrictions also extend to age-inappropriate content, including material promoting disordered eating or dangerous exercise. These aren’t simply after-the-fact responses; safeguards are strategically implemented from the moment a user interacts with the system to the final output generated.

Google employs specific classifiers to detect potentially harmful queries related to child safety, preventing the generation of inappropriate responses. These checks identify known child sexual abuse material (CSAM) and assess whether a query violates established policies, triggering either a block or a safer alternative. Recent evaluations have demonstrated improvements in Google’s Gemini 3 model, specifically in reducing “sycophancy” – the tendency to overly agree with prompts – resisting “prompt injections” (attempts to manipulate the AI’s behavior), and enhancing protection against cyber misuse.
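The pipeline described above — check an incoming query against known-abuse detection and policy classifiers, then either block it, redirect to a safer alternative, or pass it through — can be sketched roughly as follows. This is a minimal illustration, not Google's implementation; the classifier functions and messages here are hypothetical stand-ins for production systems.

```python
# Sketch of a pre-generation safety gate: a query is screened before it
# ever reaches the model. All function names and rules are illustrative.
from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    allowed: bool          # True if the query may proceed to the model
    response: str          # Fixed response when the query is intercepted


def matches_known_abuse_material(query: str) -> bool:
    # Stand-in for matching against known CSAM signals/hashes.
    return "known-abuse-marker" in query.lower()


def violates_policy(query: str) -> bool:
    # Stand-in for a policy classifier (e.g., self-harm, extremism,
    # disordered-eating content).
    return "policy-violating-topic" in query.lower()


def check_query(query: str) -> SafetyVerdict:
    """Run layered checks; block or offer a safer alternative before generation."""
    if matches_known_abuse_material(query):
        return SafetyVerdict(False, "This request cannot be completed.")
    if violates_policy(query):
        return SafetyVerdict(False, "Here are some safer resources instead.")
    return SafetyVerdict(True, "")  # Safe to hand off to the model.
```

In a real system each check would be a trained classifier with its own threshold, and the “safer alternative” branch would route to curated help resources rather than a canned string; the point here is only the ordering: detection runs before any content is generated.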

Rigorous Testing and Persona Protections

Ensuring the effectiveness of these safeguards requires continuous testing and consultation with experts. Google’s Content Adversarial Red Team (CART) completed over 350 exercises in 2025, spanning text, audio, images, video, and complex AI capabilities, to uncover vulnerabilities. These safeguards are developed by Google’s in-house specialists in collaboration with third-party child development experts, combining technical expertise with an understanding of child psychology.

Recognizing the potential for young users to form emotional connections with AI systems, Google has implemented “persona protections” to prevent harmful interactions. These protections prohibit the AI from claiming sentience, simulating romantic relationships, or role-playing as harmful characters. Google also joined other technology companies in committing to Thorn’s Safety by Design principles, which focus on preventing AI-facilitated child sexual abuse and exploitation.

Empowering Youth Through AI Literacy and Education

Beyond preventing harm, Google aims to empower young users to benefit from generative AI. The company is promoting AI literacy through resources like the “Five Must-Knows for Getting Started with AI” video and a Family AI Conversation Guide, encouraging open dialogue between parents and children.

To support learning both in and out of the classroom, Google launched Guided Learning in Gemini, a tool designed to help students understand complex topics by breaking down problems and adapting explanations to their individual needs. This conversational learning aid helps younger users find relevant resources while utilizing proven learning techniques.

As generative AI continues to evolve, Google remains committed to a responsible approach, continually refining its policies, safeguards, and tools to deliver safer experiences for younger users. The company’s ongoing work underscores the importance of balancing innovation with the need to protect vulnerable populations in the digital age.

The development and implementation of these safety measures will undoubtedly be an ongoing process, requiring continuous adaptation and collaboration between technology companies, experts, and families. Further developments in AI safety protocols and their effectiveness will be closely watched as generative AI becomes increasingly integrated into daily life.

