– Meta plans to flag AI-generated images on Facebook, Instagram, and Threads to maintain transparency.
– Currently, Meta labels content created by its Imagine AI engine with a visible watermark, and it will extend this labeling to images from third-party sources such as OpenAI, Google, and Midjourney.
– The exact design of the labels is unknown, but it may consist of the words “AI Info” next to generated content.
– Meta is also working on tools to identify invisible markers in third-party generated images, similar to what Imagine AI does with embedding watermarks in metadata.
– However, there is currently no effective way to detect AI-generated audio and video at the same level as images.
– Meta will rely on users to disclose whether video or audio content they upload was produced or edited by artificial intelligence; failure to do so may result in penalties.
– Meta is also developing a new type of watermarking tech called Stable Signature, which makes watermarks an integral part of the image-generation process so invisible markers cannot be stripped out.
– Additionally, Meta is training AI models on Community Standards to help identify content that violates policies.
– The social media labels are expected to roll out in the coming months, potentially in anticipation of the 2024 election year.
– Further details on penalties and visible watermarks for third-party sourced images are yet to be disclosed.
The tech giant already labels content made by its Imagine AI engine with a visible watermark. Moving forward, it’s going to do something similar for pictures coming from third-party sources like OpenAI, Google, and Midjourney, to name a few. It’s unknown exactly what these labels will look like, although, judging by the announcement post, they may simply consist of the words “AI Info” next to generated content. Meta states this design is not final, hinting that it could change once the update officially launches.
In addition to visible labels, the company says it’s also working on tools to “identify invisible markers” in images from third-party generators. Imagine AI does this too, embedding watermarks into the metadata of its content. The purpose is to include a unique tag that cannot be manipulated by editing tools. Meta states other platforms plan to do the same, and it wants a system in place to detect the tagged metadata.
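Meta hasn’t published the details of its tagging scheme. As a toy, self-contained sketch of the general idea of an invisible marker hidden inside image data (using least-significant-bit embedding, a classic steganography technique, and not necessarily what Imagine AI actually does), consider:

```python
def embed_tag(pixels, tag):
    """Hide a UTF-8 tag in the least-significant bits of pixel values."""
    bits = []
    for byte in tag.encode("utf-8"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_tag(pixels, length):
    """Recover a `length`-byte tag hidden by embed_tag."""
    data = bytearray()
    for i in range(length):
        byte = 0
        for bit in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        data.append(byte)
    return data.decode("utf-8")

# Example: a flat list of 8-bit grayscale pixel values.
pixels = [128] * 256
tagged = embed_tag(pixels, "AI")
print(extract_tag(tagged, 2))  # -> AI
```

Each pixel changes by at most one brightness level, which is why such markers are invisible to the eye; real systems use far more robust encodings that survive cropping and recompression.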
Audio and video labeling
So far, everything has centered around labeling images, but what about AI-generated audio and video? Google’s Lumiere is capable of creating incredibly realistic clips, and OpenAI is working on bringing video creation to ChatGPT. Is there something in place to detect more complex forms of AI content? Well, sort of.
Meta admits there is currently no way for it to detect AI-generated audio and video at the same level as images. The technology just isn’t there yet. However, the industry is working “towards this capability”. Until then, the company is going to rely on the honor system. It’ll require users to disclose if the video clip or audio file they want to upload was produced or edited by artificial intelligence. Failure to do so will result in a “penalty”. What’s more, if a piece of media is so realistic that it runs the risk of tricking the public, Meta will attach “a more prominent label” offering important details.
As for its own platforms, Meta is working on improving first-party tools as well.
The company’s AI research lab FAIR is developing a new type of watermarking tech called Stable Signature. Apparently, it’s possible to remove invisible markers from the metadata of AI-generated content. Stable Signature is supposed to stop that by making watermarks an integral part of the “image generation process”. On top of all this, Meta has begun training several LLMs (Large Language Models) on its Community Standards so the models can determine whether certain pieces of content violate its policies.
Expect to see the social media labels rolling out within the coming months. The timing of the release should come as no surprise: 2024 is a major election year for many countries, most notably the United States. Meta is seeking to mitigate misinformation from spreading on its platforms as much as possible.
We reached out to the company for more information on what kind of penalties a user may face if they don’t adequately mark their post and if it plans on marking images from a third-party source with a visible watermark. This story will be updated at a later time.
Until then, check out TechRadar’s list of the best AI image generators for 2024.