Meta will label AI-generated content from OpenAI and Google on Facebook, Instagram


[Image: The Meta logo superimposed over a pixelated face in the background. Credit: Meta / Getty Images]

On Tuesday, Meta announced its plan to start labeling AI-generated images from other companies like OpenAI and Google, as reported by Reuters. The move aims to enhance transparency on platforms such as Facebook, Instagram, and Threads by informing users when the content they see is digitally synthesized media rather than an authentic photo or video.

Coming during a US election year that is expected to be contentious, Meta’s decision is part of a larger effort within the tech industry to establish standards for labeling content created using generative AI models, which are capable of producing fake but realistic audio, images, and video from written prompts. (Even non-AI-generated fake content can potentially confuse social media users, as we covered yesterday.)

Meta President of Global Affairs Nick Clegg made the announcement in a blog post on Meta’s website. “We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” wrote Clegg. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve.”

Clegg said the initiative extends Meta’s existing practice of labeling content generated by its own AI tools to images created with other companies’ services.

“We’re building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards—so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
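
For context, the “AI generated” information Clegg refers to lives in an image’s metadata: the IPTC standard defines a digital source type value (“trainedAlgorithmicMedia”) for content produced by generative models, and C2PA wraps similar provenance data in signed manifests. Below is a minimal, hypothetical Python sketch of what checking a file for the IPTC marker could look like; it naively scans the raw bytes rather than parsing the XMP packet, and it skips the cryptographic verification a real C2PA check would perform.

```python
# A rough illustration of where the "AI generated" label lives: scan a
# file's embedded metadata for the IPTC "trained algorithmic media"
# digital source type. Real verifiers parse the XMP packet properly
# and, for C2PA, validate cryptographic signatures.

import sys

AI_MARKER = b"trainedalgorithmicmedia"  # IPTC digital source type for AI output


def has_ai_marker(path: str) -> bool:
    """Return True if the file's bytes mention the IPTC AI marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read().lower()


if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(f"{path}: {'AI marker found' if has_ai_marker(path) else 'no marker'}")
```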

In the post, Clegg expressed confidence in the companies’ ability to reliably label AI-generated images, though he noted that tools for marking audio and video content are still under development. In the meantime, Meta will require users to label their altered audio and video content, with unspecified penalties for non-compliance.

“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” he wrote.

However, Clegg said there is currently no effective way to label AI-generated text, suggesting that for written content, such measures would come too late to matter. This is in line with our reporting that AI detectors for text don’t work.

The announcement comes a day after Meta’s independent oversight board criticized the company’s policy on misleadingly altered videos as overly narrow, recommending that such content be labeled rather than removed. Clegg agreed with the critique, acknowledging that Meta’s existing policies are inadequate for managing the increasing volume of synthetic and hybrid content online. He views the new labeling initiative as a step toward addressing the oversight board’s recommendations and fostering industry-wide momentum for similar measures.

Meta admits that it will not be able to detect AI-generated content that was created without watermarks or metadata, such as images created with some open source AI image synthesis tools. Meta is researching image watermarking technology called Stable Signature that it hopes can be embedded in open source image generators. But as long as pixels are pixels, they can be created using methods outside of tech industry control, and that remains a challenge for AI content detection as open source AI tools become increasingly sophisticated and realistic.
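
The fragility is easy to illustrate: because these labels live in metadata rather than in the pixels themselves, simply re-encoding an image discards them. A short hypothetical sketch using Pillow (the file names are placeholders):

```python
# Demonstrates the limitation described above: Pillow's save() writes
# only pixel data unless metadata is passed explicitly, so any C2PA or
# IPTC "AI generated" marker in the source file is silently dropped.

from PIL import Image

with Image.open("labeled_ai_image.jpg") as im:
    im.save("stripped.jpg", quality=95)  # same picture, no provenance metadata
```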
