Meta to Label AI Images on Facebook, Instagram, and Threads
Meta announced Tuesday it will begin labeling images detected to be AI-generated across Facebook, Instagram, and Threads.
The social media conglomerate says in a release that the labeling of such images will arrive “in the coming months.” The company is currently building the tools to detect AI-generated images, and the labels will be applied in all languages supported on its platforms. It’s unclear exactly how this will roll out, of course, including whether users in certain regions will see the labels before others.
The timeline is also crucial: deepfake AI-generated images of politicians have already circulated, and the 2024 election race is well underway, something the company itself noted.
“We’re taking this approach through the next year, during which a number of important elections are taking place around the world,” the release read. “During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve. What we learn will inform industry best practices and our own approach going forward.”
The company’s own AI tools already apply an “Imagined with AI” label to photorealistic images. The new labels for third-party content will appear “when we can detect industry standard indicators that they are AI-generated,” something Meta says it is working with “industry partners” to develop.
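Meta hasn’t said exactly which indicators its detection relies on, but one widely adopted, industry-standard signal is the IPTC “Digital Source Type” field, which some generators embed in an image’s XMP metadata. As a rough illustration only, not Meta’s actual method, a naive check for that marker might look like the following sketch (the file path and the raw byte scan are assumptions made for the example):

```python
# Hypothetical sketch: scan a file's embedded XMP for the IPTC
# "trainedAlgorithmicMedia" digital source type, one industry-standard
# marker of AI-generated imagery. This is NOT Meta's detection pipeline.
from pathlib import Path

# Fragment of the IPTC NewsCodes URI that some generators write into XMP
AI_SOURCE_MARKER = b"digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Return True if the raw file bytes contain the IPTC AI-source URI.

    XMP packets are stored as plain text inside JPEG/PNG files, so a
    crude substring search can find the marker without an XMP parser.
    """
    data = Path(image_path).read_bytes()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    # "example.jpg" is a placeholder path for illustration
    print(looks_ai_generated("example.jpg"))
```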
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” the release from Meta adds. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.”
Interestingly, throughout its release, Meta focuses specifically on “photorealistic images.” This makes sense considering the insidious use of deepfakes and misinformation, but it would be interesting, and likely useful, to see labeling extended to imagery that Meta may not consider “photorealistic.”
In the same vein, Meta said it’s working on similar identification for content beyond images, specifically audio and video. However, that effort comes with its own obstacles. The company explains:
While companies are starting to include signals in their image generators, they haven’t started including them in AI tools that generate audio and video at the same scale, so we can’t yet detect those signals and label this content from other companies. While the industry works towards this capability, we’re adding a feature for people to disclose when they share AI-generated video or audio so we can add a label to it. We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so. If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label if appropriate, so people have more information and context.
In the meantime, Meta encourages users to also look into the accounts sharing information to determine whether they are trustworthy sources, especially since indicators like invisible watermarks or metadata can be stripped or altered to avoid detection.
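That fragility is easy to demonstrate: simply re-encoding an image typically discards embedded metadata, and with it any provenance marker. A minimal sketch, assuming the Pillow library and placeholder file names:

```python
# Hypothetical sketch showing why metadata-based labels are fragile:
# writing out only the pixel data produces a fresh file with no XMP/EXIF,
# erasing any embedded AI-provenance marker along the way.
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save only the pixel data; embedded metadata is left behind."""
    with Image.open(src) as im:
        rgb = im.convert("RGB")             # normalize pixel access
        clean = Image.new("RGB", rgb.size)  # brand-new image, empty info
        clean.putdata(list(rgb.getdata()))  # copy pixels, nothing else
        clean.save(dst)                     # no metadata carried over

# Placeholder file names for illustration
strip_metadata("labeled.jpg", "resaved.jpg")
```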
Image credits: Meta