Meta announces AI safeguards for 2024 EU elections

Meta (NASDAQ: META) has announced its plans to combat the misuse of generative AI in the run-up to the European Union’s 2024 Parliament elections. Meta, the owner of social media platforms Facebook, Instagram, and Threads, says it has invested more than $20 billion in safety and security and quadrupled the size of its global team working in this area to around 40,000 people.

Elections tend to be a breeding ground for dishonest activity, with campaigns deploying tactics designed to put opponents at a disadvantage while giving themselves the upper hand. In previous elections, Facebook was allegedly one of the platforms most used to run attacks and campaigns against political candidates. At the time, the most common attack vectors were misinformation and influence operations, but given the technological advances since then, Meta now has generative AI attacks on its radar as well.

“Over the last eight years, we’ve rolled out industry-leading transparency tools for ads about social issues, elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third party fact-checking programme of any social media platform to help combat the spread of misinformation,” said Meta in its blog post outlining the strategy. “More recently, we have committed to taking a responsible approach to new technologies like GenAI. We’ll be drawing on all of these resources in the run up to the election,” it added.

Meta’s steps to label and regulate AI-generated content

Meta has put several measures in place to ensure that content generated by an AI system can be quickly and easily identified. It has also updated its platforms so that AI-generated content is less likely to appear in a user’s feed if it violates Meta’s Terms of Service.

The company already labels images generated by its own Meta AI and is developing tools to label AI-generated images from other major technology providers such as Google (NASDAQ: GOOGL), OpenAI, Microsoft (NASDAQ: MSFT), Adobe, Midjourney, and Shutterstock, so that viewers can quickly determine whether an image was AI-generated. Looking further ahead, Meta plans to introduce a feature that requires users to disclose when they share AI-generated video or audio content, and it says that failure to make this disclosure may result in penalties.
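
Labeling of this kind typically relies on provenance metadata embedded in the file itself; the IPTC standard, for instance, marks synthetic images with a DigitalSourceType of “trainedAlgorithmicMedia.” As a rough illustration only (not Meta’s actual detection pipeline), a naive Python check for that marker in a file’s raw bytes might look like this:

```python
# Illustrative sketch only: flag images whose embedded metadata declares them
# AI-generated. Assumes the generator followed the IPTC convention of recording
# a DigitalSourceType of "trainedAlgorithmicMedia" in the file's XMP/IPTC block.
import sys

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC DigitalSourceType for GenAI output

def looks_ai_labeled(path: str) -> bool:
    """Return True if the file's raw bytes contain the AI-provenance marker."""
    with open(path, "rb") as f:
        return AI_MARKER in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI-labeled" if looks_ai_labeled(image_path) else "no AI label found"
        print(f"{image_path}: {verdict}")
```

A production system would parse the XMP packet properly and pair metadata checks with invisible watermark detection, since embedded metadata is easily stripped when an image is re-encoded or screenshotted.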

On the advertiser side, Meta requires those running ads to disclose when photorealistic images, videos, or realistic-sounding audio have been digitally created or altered with AI. Alongside these efforts, Meta is training fact-checkers across Europe on how best to evaluate AI-generated and digitally altered media and is running a media literacy campaign to raise public awareness of how to spot AI-generated content.

Additionally, Meta has joined two organizations—Partnership on AI and the Tech Accord—that aim to address challenges the world faces due to advancements in AI and combat the deceptive use of AI in the 2024 elections, respectively.

All of Meta’s current and planned measures to combat AI misuse are aimed at helping audiences quickly identify whether the content they are viewing is authentic or fabricated. Being able to distinguish legitimate from illegitimate content is crucial right now, and it will only become more important as AI systems grow more advanced.

Why now: The urgent need for safeguards against AI-induced election misinformation

The timing of Meta’s enhanced focus on tools and safeguards against the misuse of generative AI technologies isn’t a coincidence. Several elections are taking place in 2024 and are prime targets for AI-induced misinformation and disinformation campaigns.

Generative AI systems can create highly realistic and convincing digital content, including images, videos, and audio. While these advancements have the potential to pave the way for innovation and creativity, they also pose significant risks when used for malicious purposes. In the context of an election, the stakes are exceptionally high, as AI-generated content can be used to create false narratives, impersonate political figures, or manipulate facts in ways that are challenging to detect and counter. Such activities can severely impact public opinion, voter behavior, and, ultimately, the integrity of the election process.

Earlier this year, one such attack took place when an individual ran a disinformation campaign in New Hampshire, using AI to replicate President Joe Biden’s voice in a message telling residents that they did not need to go out and vote for him in an upcoming election. Although the campaign was eventually identified as illegitimate, hours passed before word spread that the voice was not the president’s and that the phone call was a malicious attack.

With attacks like this likely to increase as elections approach, more companies, especially those that are prime targets for the spread of misinformation and disinformation, will have to put tools and safeguards in place that make it easy to detect whether content has been AI-generated.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures the quality and ownership of data input, keeping data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
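
As a simplified sketch of the immutability claim (a generic hash chain, not any particular enterprise blockchain product): chaining each record to the hash of the one before it means that altering any record invalidates every hash that follows, making after-the-fact tampering detectable.

```python
# Minimal hash-chain sketch: each record's hash commits to the previous hash,
# so editing any earlier record breaks verification of the whole chain.
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous record's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list[dict]) -> list[str]:
    """Return the hash chain for a sequence of records."""
    hashes, prev = [], "0" * 64  # genesis placeholder
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes

def verify_chain(records: list[dict], hashes: list[str]) -> bool:
    """Recompute the chain and confirm nothing was altered."""
    return build_chain(records) == hashes

if __name__ == "__main__":
    records = [{"source": "camera", "note": "original capture"},
               {"source": "editor", "note": "AI-assisted edit"}]
    chain = build_chain(records)
    print(verify_chain(records, chain))   # True
    records[0]["note"] = "tampered"       # any edit breaks every later hash
    print(verify_chain(records, chain))   # False
```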

Watch: What does blockchain and AI have in common? It’s data

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.