YouTube has announced that its new technology for detecting likenesses is now available to eligible creators in the YouTube Partner Program.
The rollout follows a trial phase, and creators were notified about it early on Tuesday. The technology lets creators request the removal of AI-generated content that features their likeness.

This initial rollout is just the beginning, according to a spokesperson from YouTube. The system is designed to recognize and manage AI content that includes creators’ faces and voices.
Its main purpose is to safeguard individuals from having their likeness exploited, whether to promote products they haven't endorsed or to spread false information. There have already been notable instances of such misuse, including a company using an AI version of YouTuber Jeff Geerling's voice to market its products.

YouTube shared visuals on its Creator Insider channel showing how creators can use the new feature. To start, creators navigate to the “Likeness” tab, agree to data processing, and scan a QR code with their smartphone, which takes them to a webpage for identity verification that requires a photo ID and a short video selfie.
Once granted access to the tool, creators can see all the videos that have been detected and can either request removal in line with YouTube’s privacy policies or submit a copyright claim. They also have the option to archive a video.

Creators can opt out of the technology at any time, and YouTube will stop scanning videos for their likeness within 24 hours of the opt-out.
The likeness-detection feature has been in a testing phase since earlier this year. YouTube first revealed plans to collaborate with Creative Artists Agency (CAA) last year to assist celebrities and creators in identifying content that uses their AI likeness on the platform.
In April, YouTube showed support for the NO FAKES Act, a proposed legislation aimed at addressing the problem of AI-generated replicas that imitate a person’s image or voice, which can mislead others and create harmful content.