YouTubers face penalties if they use generative AI — unless they comply with this new rule

Key Takeaways:

– YouTube will require creators to disclose if a video was made with generative AI.
– Failure to consistently disclose this information may result in penalties such as content removal or suspension from the YouTube Partner Program.
– Artists and creators will have the ability to request the removal of content that uses their likeness without consent.
– The availability of generative AI has increased the risk of deepfakes and misinformation, especially during the upcoming presidential election.
– OpenAI is developing a tool to detect if an image was created with its AI generator.
– Meta has implemented a policy requiring political advertisers to disclose the use of generative AI.
– Creators uploading videos will have the option to indicate whether the content contains realistic altered or synthetic material.
– Labels will be added to the description panel to inform viewers of AI-generated or altered content.
– Content involving sensitive topics will have a more prominent label.
– AI technology will be used to enforce content moderation and detect violations of community guidelines.

Mashable:

YouTube will soon require creators to disclose whether a video was made with generative AI.

On Tuesday, the video streaming giant announced this and other updates aimed at mitigating the misleading or harmful effects of generative AI.

“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material,” said Jennifer Flannery O’Connor and Emily Moxley, YouTube product management VPs.

[Image: What YouTube's labels indicating AI-generated content will look like. Credit: YouTube]

Creators who fail to consistently disclose this information might face penalties, such as content removal or suspension from the YouTube Partner Program. The announcement also said artists and creators will be able to request the removal of content (including music) that uses their likeness without consent.

The widespread availability of generative AI has heightened the threat of deepfakes and misinformation, especially with the upcoming presidential election. Both the public and private sectors have acknowledged a need to detect and prevent the nefarious use of generative AI.

For example, President Biden’s AI executive order specifically addressed the need for labeling or watermarking AI-generated content. OpenAI is working on its own tool, a “provenance classifier,” that detects whether an image was made with its DALL-E 3 AI generator. Just last week, Meta announced a new policy that requires political advertisers to disclose whether an ad uses generative AI.

On YouTube, when a creator uploads a video, they’ll be given the option of indicating whether it “contains realistic altered or synthetic material,” the blog post said. “For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

Labels informing viewers that a video has AI-generated or altered content will be added to the description panel. A “more prominent label” will be added to content involving sensitive topics. Even if AI-generated content is appropriately labeled, if it violates YouTube’s community guidelines, it will be taken down.

How will all of this content moderation be enforced? By AI, of course. The same generative AI that can produce convincingly realistic fake content can also be used to identify content that violates content policies. YouTube will deploy generative AI technology to help contextualize and understand threats at scale.
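YouTube has not published any details of its moderation stack, so any concrete example here is speculative. As a loose illustration of the general idea (a model scoring content against policy categories at scale), the minimal Python sketch below uses the public Hugging Face transformers zero-shot classification pipeline with the facebook/bart-large-mnli model; the policy labels, threshold, and function name are invented for illustration and are not YouTube's actual categories or tooling.

```python
# Hypothetical sketch only: YouTube has not published its moderation stack.
# This illustrates the general idea of an AI classifier flagging potentially
# policy-violating content, using the public Hugging Face `transformers`
# zero-shot classification pipeline. The labels and threshold below are
# invented for illustration.
from transformers import pipeline

# facebook/bart-large-mnli is a public NLI model commonly used for
# zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Assumed policy categories, for illustration only.
POLICY_LABELS = [
    "election misinformation",
    "impersonation or deepfake",
    "harmless entertainment",
]

def flag_description(text: str, threshold: float = 0.7) -> list[str]:
    """Return any policy labels scored above the threshold for this text."""
    result = classifier(text, candidate_labels=POLICY_LABELS, multi_label=True)
    return [
        label
        for label, score in zip(result["labels"], result["scores"])
        if score >= threshold
    ]

# Example: a video description that a moderation pipeline might surface
# for human review.
print(flag_description(
    "Leaked AI-generated clip shows the candidate conceding the race early."
))
```

A real system would of course operate on video, audio, and metadata rather than a single description string, and route flagged items to human reviewers; the sketch only shows the classification step.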


