
YouTube Launches Innovative Likeness-Detection Tech to Protect Creators from AI Misuse

By Lauren Forristal, TechCrunch — October 21, 2025

YouTube has officially rolled out its much-anticipated likeness-detection technology to eligible creators within the YouTube Partner Program, following a successful pilot phase earlier this year. The new tool is designed to help creators identify and manage AI-generated content that uses their face, voice, or overall likeness without consent.

Protecting Creators from AI Misuse

The rise of AI-generated media has raised growing concerns about the unauthorized use of individuals’ likenesses—often manipulated to endorse products, spread misinformation, or generate deceptive content. YouTube’s new technology aims to address these concerns by detecting videos featuring AI-generated replicas of creators and providing options to remove such content.

A YouTube spokesperson told TechCrunch that today marks the first wave of the rollout, with eligible creators receiving email invitations to access the tool. The technology scans videos to identify instances where a creator’s likeness, including facial features and vocal patterns, has been synthesized or manipulated.

How Creators Can Access and Use the Tool

Instructions published on YouTube’s Creator Insider channel guide creators through the enrollment process. To get started, creators must:

  • Navigate to the “Likeness” tab in their YouTube Studio dashboard.
  • Consent to data processing required for likeness detection.
  • Use their smartphone to scan a QR code presented on-screen.
  • Complete identity verification by uploading a photo ID and a short selfie video on a secure webpage.

Once verified, creators gain access to a dashboard displaying all videos flagged for using their likeness. They can request removal of these videos in line with YouTube’s privacy policies or use copyright claims if applicable. There is also an option to archive detected videos for record-keeping. Creators retain full control and can opt out of the system at any time, after which YouTube will stop scanning videos associated with their likeness within 24 hours.

Industry and Legislative Context

YouTube initially announced the development of this technology last year through a partnership with Creative Artists Agency (CAA), aiming to support celebrities, athletes, and high-profile creators in safeguarding their digital identity. The platform has emphasized the importance of combating deceptive AI replicas, a concern that has gained urgency as sophisticated deepfake and voice cloning technologies become more accessible.

In April 2025, YouTube publicly endorsed the NO FAKES Act, pending legislation seeking to curb harmful AI-generated impersonations that could deceive the public or cause reputational damage.

Addressing Real-World Problems

Instances of AI misuse abound. For example, the tech company Elecrow previously used an AI-generated voice clone of popular YouTuber Jeff Geerling to promote its products without authorization, highlighting the potential for exploitation on digital platforms.

With this launch, YouTube positions itself at the forefront of digital identity protection in the age of AI while equipping creators with tools to regain agency over their image and voice.


About the Author:
Lauren Forristal covers media, streaming, apps, and platform innovations for TechCrunch. She can be reached at laurenf.techcrunch@gmail.com or via Signal at laurenforris22.25.

For more insights on AI and digital security, visit TechCrunch’s AI topic page.
