
Meta wants to combat AI image misinformation with invisible watermarks

Social media giant Meta (NASDAQ: META) has reiterated its commitment to ensuring responsible artificial intelligence (AI) use with the proposed deployment of invisible watermarks on synthetic images.

In a blog post, the company said the new watermarking feature is designed to “reduce the chances of people mistaking them for human-generated content.” The feature is expected to be integrated into Meta’s text-to-image generator, Imagine, joining the firm’s growing arsenal in cracking down on misinformation with generative AI.

The feature is set to launch in the coming weeks, and while technical details remain sparse, available information suggests the watermark will be “applied with a deep learning model.” Meta’s incoming watermark is expected to be invisible to the human eye, yet easily detectable by a corresponding model.
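Meta has not published how its deep-learning watermark works, but the underlying idea of a mark that is invisible to humans yet trivially machine-detectable can be illustrated with a classic toy technique: least-significant-bit (LSB) embedding. The sketch below is purely illustrative and is not Meta's method; unlike Meta's claimed watermark, LSB embedding does not survive cropping or re-encoding. All function names here are hypothetical.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Hide bits in the least significant bit of the first len(bits) pixels.

    Changing a pixel value by at most 1 out of 255 is imperceptible to the
    human eye, but a 'corresponding model' (here, a trivial extractor that
    knows where to look) can read the payload back perfectly.
    """
    marked = pixels.copy()
    flat = marked.ravel()  # view into the copy, so edits land in `marked`
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b  # clear LSB, then write payload bit
    return marked

def extract_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Recover the embedded bits by reading back the LSBs."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]

# Toy 8x8 grayscale "image"
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(image, payload)

recovered = extract_watermark(marked, len(payload))
max_change = int(np.abs(marked.astype(int) - image.astype(int)).max())
```

Production systems like Meta's (and Google's SynthID) instead train an encoder network to spread the signal redundantly across the whole image in ways that survive cropping, compression, and color adjustments, with a paired decoder network recovering it.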

“We’re committed to building responsibly with safety in mind across our products and know how important transparency is when it comes to the content AI generates,” said Meta. “In the coming weeks, we’ll add invisible watermarking to the Imagine with Meta AI experience for increased transparency and traceability.”

Meta says the feature will be impervious to standard photo editing tools, preventing bad actors from stripping it out. Per the announcement, cropping, taking screenshots, or adjusting the color, brightness, and contrast of an image will not remove the watermark.

While Imagine is Meta’s first attempt at invisible AI watermarking, the company disclosed its intention to integrate the feature across its range of AI image generators. The Big Tech firm clarified that the invisible watermark will not affect the quality of generated images.

Meta’s announcement follows the rollout of over 20 new features to its virtual assistant Meta AI, which the company says will improve its capabilities for users.

In September, Google (NASDAQ: GOOGL) launched its own invisible watermark tool in a valiant attempt to stifle the rising trend of AI-based misinformation. Dubbed SynthID, Google says the watermarking feature can be extended beyond images to videos and text, while warning that the tool is not “a silver bullet to the deepfake problem.”

Fighting deepfakes

Outside of copyright issues, analysts have pointed to deepfakes as an existential threat to AI, prompting AI firms to explore new solutions to address the challenge. Apart from invisible watermarks, experts are mulling over the prospects of using blockchain technology to label AI-generated content.

Ahead of the 2024 elections, the U.S. Federal Election Commission (FEC) has announced its intention to introduce full regulations to combat the scourge of deepfakes, following multiple petitions from concerned non-profit organizations.

“The technology will almost certainly create the opportunity for political actors to deploy it to deceive voters, in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire,” read one petition.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: AI truly is not generative, it’s synthetic


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.