YouTube is widening access to its likeness detection system so that celebrities and their representatives can search for AI-generated videos that use their face or voice and, where appropriate, ask for them to be removed, according to the company and reporting by TechCrunch. The move marks the latest extension of a tool that YouTube has been building to help public figures respond to deepfakes and other synthetic content made without their consent.

The platform says the feature is intended for talent agencies, management firms and the people they represent, even in cases where the celebrity does not run a YouTube channel. The system works in a similar way to Content ID, YouTube's long-standing copyright enforcement product: it identifies material that appears to use a protected likeness and routes removal requests through a review process.

That review remains important: requests to remove AI-generated clips are not automatic and will still be checked against YouTube’s policies, as CNET noted in reporting on the service’s earlier rollout. The company first introduced likeness detection for a limited group of creators in 2025, then expanded it to more monetised users and, in March 2026, to a pilot group that included government officials, political candidates and journalists, according to TechCrunch.

The latest expansion is backed by some of Hollywood’s most influential agencies. YouTube said it has worked with CAA, UTA, WME and Untitled Management to refine the tool, suggesting the company sees the entertainment industry as a major battleground in the wider fight over synthetic media. YouTube has also publicly supported the NO FAKES Act, which would target unauthorised AI recreations of a person’s voice or appearance, underscoring how the platform is positioning its own tools alongside broader efforts to regulate deepfakes.


Source: Noah Wire Services