Facebook and Instagram will label fake AI images to stop misinfo from spreading

Key Takeaways:

– Meta plans to flag AI-generated images on Facebook, Instagram, and Threads to maintain transparency.
– Currently, Meta labels content created by its Imagine AI engine with a visible watermark, and it will extend this labeling to third-party sources like OpenAI and Google.
– The exact design of the labels is unknown, but they may consist of the words “AI Info” displayed next to generated content.
– Meta is also working on tools to identify invisible markers in third-party generated images, similar to what Imagine AI does with embedding watermarks in metadata.
– However, there is currently no effective way to detect AI-generated audio and video at the same level as images.
– Meta will rely on users to disclose whether their video or audio content was produced or edited by AI; failure to do so may result in penalties.
– Meta is also developing a new type of watermarking tech, Stable Signature, to prevent invisible markers from being removed from AI-generated content.
– Additionally, Meta is training AI models on Community Standards to help identify content that violates policies.
– The social media labels are expected to roll out in the coming months, potentially in anticipation of the 2024 election year.
– Further details on penalties and visible watermarks for third-party sourced images are yet to be disclosed.

TechRadar:

Meta will begin flagging AI-generated images on Facebook, Instagram, and Threads in an effort to uphold online transparency.

The tech giant already labels content made by its Imagine AI engine with a visible watermark. Moving forward, it will do something similar for pictures from third-party sources such as OpenAI, Google, and Midjourney, to name a few. It’s unknown exactly what these labels will look like, although, judging by the announcement post, they may simply consist of the words “AI Info” next to generated content. Meta states this design is not final, hinting that it could change once the update officially launches.

(Image credit: Meta)

In addition to visible labels, the company says it’s also working on tools to “identify invisible markers” in images from third-party generators. Imagine AI does this too by embedding watermarks into the metadata of its content. The purpose is to include a unique tag that cannot be stripped out by editing tools. Meta states other platforms plan to do the same, and it wants a system in place to detect that tagged metadata.
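To make the idea concrete, here is a minimal sketch in Python of what a metadata-based check might look like. It scans a file’s bytes for the IPTC “Digital Source Type” term that industry tooling uses to mark generative-AI output. This is only an illustration of the tagging concept, not Meta’s actual detector, and the file names are hypothetical.

```python
# Minimal sketch: check an image file for the IPTC "Digital Source Type"
# value used in industry metadata standards to mark generative-AI output.
# Illustrative only; this is not Meta's detection system.

from pathlib import Path

# IPTC vocabulary term for media created by a generative model.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def looks_ai_tagged(image_path: str) -> bool:
    """Return True if the file's embedded metadata carries the AI marker.

    A raw byte scan is crude, but XMP/IPTC blocks are stored as plain
    text inside JPEG/PNG containers, so the term is findable this way.
    """
    data = Path(image_path).read_bytes()
    return AI_SOURCE_TYPE in data

if __name__ == "__main__":
    for name in ["photo.jpg", "generated.png"]:  # hypothetical files
        try:
            verdict = "AI-tagged" if looks_ai_tagged(name) else "no marker found"
            print(name, "->", verdict)
        except FileNotFoundError:
            print(name, "-> file not found")
```

Note that this kind of check is exactly what editing tools can defeat by rewriting metadata, which is why the invisible, in-pixel watermarking discussed further down matters.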

AI Eclipse TLDR:

Meta, the parent company of Facebook, Instagram, and Threads, has announced that it will begin flagging AI-generated images on its platforms in an effort to promote online transparency. Currently, content created by Meta’s Imagine AI engine is labeled with a visible watermark, and the company plans to do something similar for pictures from third-party sources like OpenAI and Google. The design of these labels is not final and may change before the update launches. Meta is also developing tools to identify invisible markers in images from third-party generators, similar to how Imagine AI embeds watermarks in the metadata of its content. Meta says other platforms plan to embed similar markers, and it wants a system in place to detect that tagged metadata.

However, when it comes to AI-generated audio and video, Meta admits that it currently lacks the capability to detect them at the same level as images. The company is working to develop this capability, but until then it will rely on users to disclose whether an uploaded video or audio file was produced or edited by AI. Failure to do so may result in a penalty. If a piece of media is highly realistic and could potentially deceive the public, Meta will attach a more prominent label with important details.
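As a rough illustration of the rule described above, the sketch below combines the two signals the article mentions, detection of invisible markers and the uploader’s own declaration, and reserves the more prominent label for highly realistic media. Every name here is a hypothetical assumption, not Meta’s API.

```python
# Hypothetical moderation logic for the disclosure rule; illustrative only.
from dataclasses import dataclass

@dataclass
class Upload:
    media_type: str        # "image", "video", or "audio"
    detected_ai: bool      # invisible markers found (feasible for images today)
    user_disclosed: bool   # uploader declared the content as AI-made
    high_realism: bool     # realistic enough to potentially deceive the public

def label_for(upload: Upload) -> str:
    """Return the moderation outcome for an upload (hypothetical logic)."""
    is_ai = upload.detected_ai or upload.user_disclosed
    if not is_ai:
        return "no label"
    if upload.high_realism:
        # Highly realistic media gets a more prominent label with details.
        return "prominent label with added context"
    if upload.detected_ai and not upload.user_disclosed:
        # AI content the uploader failed to declare risks a penalty.
        return "AI label, possible penalty for non-disclosure"
    return "standard AI label"
```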

Meta is also working on improving its own first-party tools. Its AI research lab, FAIR, is developing a new type of watermarking technology called Stable Signature, which aims to prevent invisible markers from being removed from AI-generated content. Additionally, Meta has started training several large language models (LLMs) on its Community Standards to help AI determine whether certain content violates its policies.
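The defining property of a Stable Signature-style watermark is that the identifying bits live in the pixels themselves, so stripping metadata does not remove them. The sketch below shows only the verification step: extracted bits are matched against a per-generator key, and a match rate far above the roughly 50% expected by chance counts as detection. The extractor here is a stub standing in for a trained neural decoder, and the key length and threshold are assumptions.

```python
# Conceptual sketch of verifying a pixel-level (not metadata) watermark.
# The real Stable Signature system uses a learned decoder network.
import numpy as np

# Per-generator signing key (48-bit payload assumed for illustration).
KEY_BITS = np.random.default_rng(42).integers(0, 2, size=48)

def extract_bits(image: np.ndarray) -> np.ndarray:
    """Stub standing in for a trained watermark-decoder network."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.integers(0, 2, size=48)

def is_watermarked(image: np.ndarray, threshold: float = 0.9) -> bool:
    """Flag the image if the extracted bits closely match the signing key."""
    match_rate = (extract_bits(image) == KEY_BITS).mean()
    # Random bits match ~50% of the time, so demand a much higher rate.
    return match_rate >= threshold

if __name__ == "__main__":
    dummy = np.zeros((64, 64, 3), dtype=np.uint8)  # unwatermarked test image
    print("watermark detected:", is_watermarked(dummy))
```

For the Community Standards work, here is a similarly hedged sketch of the classifier idea: pair a policy excerpt with a post and ask an instruction-tuned model for a verdict. The `query_llm` helper, the policy text, and the prompt format are all hypothetical.

```python
# Hypothetical policy-violation check using an LLM; illustrative only.
POLICY_EXCERPT = "Do not post content that impersonates real people to deceive others."

def build_prompt(post_text: str) -> str:
    return (
        "Community Standard:\n" + POLICY_EXCERPT + "\n\n"
        "Post:\n" + post_text + "\n\n"
        "Does the post violate the standard? Answer VIOLATES or OK."
    )

def violates_policy(post_text: str, query_llm) -> bool:
    """True if the model's verdict starts with VIOLATES (query_llm is a stand-in)."""
    return query_llm(build_prompt(post_text)).strip().upper().startswith("VIOLATES")
```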

The social media labels are expected to roll out in the coming months, as Meta aims to curb the spread of misinformation on its platforms in the lead-up to the major election year of 2024. The company has not yet provided details on the penalties users may face for failing to adequately mark their posts, or on whether images from third-party sources will carry visible watermarks.