YouTube Expands DeepFakes Prevention Program

In support of two controversial Congressional bills that address the management of AI-generated content featuring unauthorized replicas of notable people online, YouTube is expanding its suite of related deepfake tools and tech for “some of the world’s most influential figures.”

“As AI continues to evolve, we're committed to ensuring it's used responsibly, especially when it comes to protecting creators and viewers,” YouTube announced on Wednesday, stating its active support of the No Fakes Act of 2025, a bill introduced by Senator Chris Coons (D-DE) and Senator Marsha Blackburn (R-TN).

The No Fakes Act -- along with the Take It Down Act, which YouTube also supports -- aims to tackle the growing spread of unauthorized digital replicas, such as the fraudulent ad in which an AI-manipulated Tom Hanks sells a sketchy “cure” for type 2 diabetes.

The No Fakes Act would allow individual users to notify platforms about bad actors who are creating, posting or profiting from unauthorized digital copies of them -- ultimately removing platforms’ liability if they take down the content.

The Take It Down Act would make it a crime to publish non-consensual intimate images, including AI-generated deepfakes, and require that social media apps enforce processes to immediately remove the reported images.

In a press release, YouTube's Vice President of Public Policy Leslie Miller said the No Fakes Act is consistent with the company's ongoing efforts to protect creators and viewers, and that YouTube has worked with Senators Coons and Blackburn, as well as industry groups -- including the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA) -- to “push for a shared consensus on this legislation.”

Coons and Blackburn will be announcing the reintroduction of the No Fakes Act at a press conference on Wednesday.

Different versions of the No Fakes Act have been introduced in 2023 and 2024, with previous drafts receiving criticism from civil liberties groups. While YouTube believes the act “focuses on the best way to balance protection with innovation: putting power directly in the hands of individuals to notify platforms of AI-generated likenesses they believe should come down,” others believe it to be a crude, vague and slapdash approach to the issue.

“This bill is like using a machete for open-heart surgery,” said Adam Eisgrau, Senior Director of AI, Creativity, and Copyright Policy at tech coalition Chamber of Progress. “‘Overly-broad’ is an understatement. The NO FAKES Act could mire companies, artists, and platforms in complex legal liability and litigation to the detriment of speech and innovation.”

With regard to the Take It Down Act, Fight for the Future Director Evan Greer acknowledges that the bill attempts to solve a real problem but “would do more harm than good” because it “threatens encryption & free expression, at a time when we need privacy tools more than ever.”

Despite the criticism, YouTube remains supportive of these acts, stating that AI is a powerful tool with the “potential to revolutionize creative expression,” but comes with “risks” that platforms have a responsibility to address in a proactive manner.

This is why the company is also expanding the pilot of the “likeness management technology” it introduced last year in partnership with the Creative Artists Agency (CAA). According to YouTube, the program allows celebrities and creators to directly request the removal of AI-generated copies of their likeness from the platform.

YouTube has selected prominent creators MrBeast, Mark Rober and Marques Brownlee as the program's initial participants; they will help the platform scale and refine its automated deepfake detection technology.

The Google-owned company has also decided to update its privacy processes to align with the previously mentioned deepfake bills, now allowing users to request the removal of synthetic content depicting their voice and image.
