YouTube formally announced Tuesday that it will require creators to disclose when they upload realistic content that has been altered or synthetically generated in any way.
The requirement covers content made with tools based on artificial intelligence (AI), and music will have its own safeguards.
Earlier this month, Google said it would begin applying SynthID watermarks to all AI-generated creative assets in Performance Max and Google Ads, as the company continues to build out features and tools that automate processes.
Altered or synthetic content on YouTube will now require a label. When uploading content, creators will select an option indicating whether the images or videos are real, altered, or synthetic. The change will roll out in the coming months.
Jennifer Flannery O’Connor and Emily Moxley, vice presidents of content at YouTube, provided an example in a blog post: “This could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do.”
YouTube plans to enforce the change by adding labels to the creator’s content. Creators who choose not to disclose this information may have their content removed, be suspended from the platform and the YouTube Partner Program, or face other penalties, O’Connor and Moxley explained.
When content is synthetic, YouTube will identify it in one of two ways. The company will add a label to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics, YouTube will add a more prominent label to the video player itself.
When a label is not sufficient to mitigate the risk of harm, some synthetic media -- regardless of whether it is labeled -- will be removed from the platform if it violates the Community Guidelines. For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.
YouTube’s Music partners also will have the ability to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice. These removal requests will become available to labels or distributors who represent artists participating in YouTube’s early AI music experiments.
The company said it will use its privacy request process to remove AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. This includes content digitally generated without the person’s permission or used to misrepresent their points of view.
The move comes just weeks after U.S. President Joe Biden signed an executive order on AI that seeks to create an early set of guardrails balancing the interests of cutting-edge technology companies with national security and consumer rights.
Biden and Chinese leader Xi Jinping also are expected to sign a deal limiting the use of AI in drones, nuclear weapon control systems, and more when they meet in San Francisco this week.