YouTube is bolstering its defenses against AI-generated disinformation by extending its likeness detection tool to a new group: government officials, journalists and political candidates. The move, announced Tuesday, aims to help these public figures identify and potentially remove deepfakes – AI-created videos convincingly mimicking their appearance – from the platform. This expansion comes roughly six months after YouTube initially rolled out the technology to a select group of creators within the YouTube Partner Program.
The proliferation of increasingly realistic AI video presents a growing challenge for online platforms. While YouTube generally embraces AI video creation, the company acknowledges the potential for deceptive content to spread misinformation and facilitate scams. Deepfakes, in particular, have proven potent tools for malicious actors seeking to manipulate public perception, especially when targeting high-profile individuals. The company’s efforts reflect a broader industry concern about the risks associated with rapidly advancing artificial intelligence.
The new tool allows eligible individuals to upload a video of themselves, along with government identification, to establish a baseline for likeness detection. YouTube will then proactively notify participants via YouTube Studio when videos are flagged as potentially containing a deepfake of their likeness. However, detection doesn’t automatically guarantee removal. According to YouTube, the platform will carefully evaluate each flagged video, weighing its existing policies on free expression, parody, and political critique. “YouTube has a long history of protecting free expression and content in the public interest—including preserving content like parody and satire, even when used to critique world leaders or influential figures,” the company stated in a blog post.
This careful approach highlights the complex balancing act YouTube faces. Drawing the line between legitimate satire and harmful disinformation is a significant challenge, and the platform’s decisions in this area will likely face scrutiny. As Axios reported, the expansion is driven by a deepfake problem that new AI systems have made markedly worse.
Beyond simply removing deepfakes, YouTube is also exploring potential future revenue models for creators whose likenesses are used in AI-generated content. Rene Ritchie, YouTube’s Creator Liaison, suggested the possibility of a system similar to Content ID – YouTube’s existing copyright detection system – that could allow creators to monetize AI versions of themselves. “Right now, YouTube’s absolute priority is on safety and protection, but YouTube is exploring similar future paths for likeness that could open up entirely new revenue opportunities for creators and artists to manage, authorize, and benefit from AI likeness,” Ritchie said in a video accompanying the announcement. This concept aligns with ongoing discussions within the entertainment industry, including negotiations between SAG-AFTRA and AMPTP regarding royalties for AI-generated performances, as NBC News noted.
YouTube is initiating the rollout of the expanded tool itself, reaching out to eligible politicians and journalists on the platform; participants will then have the option to enroll and use the likeness detection capabilities. The technology initially launched in October 2025 to YouTube Partner Program members, reflecting a phased approach to deployment. TechCrunch highlighted that YouTube aims to balance user free expression with the risks of AI impersonation.
As AI technology continues to evolve, the fight against deepfakes and disinformation will undoubtedly intensify. YouTube’s latest move represents a proactive step in addressing this challenge, but the effectiveness of the tool will depend on its ability to accurately identify deepfakes while respecting the principles of free expression. The platform’s exploration of revenue-sharing models for AI likenesses also suggests a potential path toward a more sustainable and equitable future for creators in the age of artificial intelligence.
The coming months will be crucial in assessing the impact of this expanded tool and determining whether it can effectively mitigate the risks posed by deepfakes. Further developments in AI detection technology, coupled with ongoing policy discussions, will shape the future of online content moderation and the protection of individual identities in the digital realm.