YouTube cracks down on synthetic media with AI disclosure requirement

Key Takeaways:

– YouTube will implement stricter measures on realistic AI-generated content hosted on the platform.
– Creators will be required to disclose when they have created altered or synthetic content using AI tools.
– YouTube will provide new options for creators to indicate if their content includes realistic AI-generated or AI-altered material.
– YouTube will introduce a new labeling system to inform viewers about the nature of the content they are watching.
– Content created by YouTube’s own generative AI products will be automatically labeled as altered or synthetic.
– Creators who do not disclose their use of AI may face penalties such as content removal or suspension from the YouTube Partner Program.
– YouTube will deploy AI-powered content moderation tools to enhance the identification and handling of content that violates the new rules.
– Individuals can request the removal of AI-generated content that simulates identifiable individuals through a privacy request process.
– YouTube will introduce a policy for artists or music publishers to request the removal of AI-generated music that mimics an artist’s voice.
– YouTube aims to balance the new applications of AI with community safety efforts and collaborate with creators and artists to build a future that benefits all.

Ars Technica:

On Tuesday, YouTube announced it will soon implement stricter measures on realistic AI-generated content hosted by the service. “We’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” the company wrote in a statement. The changes will roll out over the coming months and into next year.

The move by YouTube comes as part of a series of efforts by the platform to address challenges posed by generative AI in content creation, including deepfakes, voice cloning, and disinformation. When creators upload content, YouTube will provide new options to indicate if the content includes realistic AI-generated or AI-altered material. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do,” YouTube writes.

In the detailed announcement, Jennifer Flannery O’Connor and Emily Moxley, vice presidents of product management at YouTube, explained that the policy update aims to maintain a positive ecosystem in the face of generative AI. “We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube,” they write. “We have long-standing policies that prohibit technically manipulated content that misleads viewers … However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”

YouTube will also introduce a new labeling system on the platform that will inform viewers about the nature of the content they are watching. For instance, a new label will be added to the description panel and video player for content that has been altered or is synthetic, especially when discussing sensitive topics like “elections, ongoing conflicts and public health crises, or public officials,” the company says.

Also, content created by YouTube’s own generative AI products, such as AI-powered video creator Dream Screen, will be automatically labeled as altered or synthetic. The company shared three mock-ups of what these labels may look like, although they may change over time.

Creators who fail to disclose their use of AI may face penalties, including content removal or suspension from the YouTube Partner Program. Further, YouTube plans to deploy AI-powered content moderation tools aimed at improving the speed and accuracy of identifying and handling content that violates the new rules.

Response to deepfake and artist imitation concerns

YouTube also announced plans to allow individuals to request the removal of AI-generated content that simulates identifiable individuals, including their faces or voices, such as deepfakes, through a privacy request process. “Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests,” they write. “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”

Along those lines, YouTube will also introduce a policy allowing artists or music publishers to request the removal of AI-generated music that mimics an artist’s unique singing or rapping voice. As with privacy requests, the company says it will weigh whether the content is part of news reporting, analysis, or critique of the synthetic vocals before removing it.

With the needs of parody, fair use, and political commentary in mind, YouTube says it is attempting to balance new applications of AI with its community safety efforts. “We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI,” they write. “We’ll work hand-in-hand with creators, artists, and others across the creative industries to build a future that benefits us all.”


AI Eclipse TLDR:

YouTube has announced that it will implement stricter measures on realistic AI-generated content hosted on its platform. Creators will be required to disclose when they have created altered or synthetic content that is realistic, including through the use of AI tools, with the changes rolling out gradually over the coming months and into next year. The move is part of YouTube’s broader effort to address challenges posed by generative AI, such as deepfakes, voice cloning, and disinformation, and the platform will provide new options for creators to indicate if their content includes AI-generated or AI-altered material.

YouTube will also introduce a new labeling system to inform viewers about the nature of the content they are watching, especially when it involves sensitive topics like elections or public health crises. Content created by YouTube’s own generative AI products will be automatically labeled as altered or synthetic. Creators who fail to disclose their use of AI may face penalties, including content removal or suspension from the YouTube Partner Program, and YouTube plans to deploy AI-powered content moderation tools to improve the speed and accuracy of identifying and handling content that violates the new rules.

In addition, individuals will be able to request the removal of AI-generated content that simulates identifiable people, including their faces or voices, through a privacy request process; YouTube will consider various factors when evaluating these requests, such as whether the content is parody or satire. A related policy will let artists or music publishers request the removal of AI-generated music that imitates an artist’s unique singing or rapping voice. YouTube aims to balance the new applications of AI with its community safety efforts, taking into account the needs of parody, fair use, and political commentary, and intends to work with creators, artists, and others in the creative industries to build a future that benefits everyone.