Facebook’s parent company Meta (NASDAQ: META) has barred political advertisers from using its generative artificial intelligence (AI) tools to create promotions, citing mounting misinformation concerns.
Meta says its decision to ban political ads from using its proprietary AI tools is in line with the company’s policy on encouraging safe AI innovation and usage, according to a Reuters report. Meta unveiled the new ad policy via an update to its help center, extending the AI ban to other regulated sectors.
“As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features,” said Meta.
The big tech firm appears to hint that the bans may be temporary as it joins forces with regulators to establish a robust framework for AI innovation.
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of generative AI in ads that relate to potentially sensitive topics in regulated industries,” read the update.
Debuting in early October, Meta’s new AI tools allow advertisers to seamlessly create new backgrounds for their ads and make image adjustments and other nifty variations through text prompts. The ban on political and other regulated content comes ahead of Meta’s proposed full-scale rollout of the AI tools to global advertisers across its platforms.
Rather than impose a blanket ban on political advertisements, Google (NASDAQ: GOOGL) has taken a different approach, requiring all political promotions to clearly label AI-generated elements. Google’s new rules, set to come into operation in mid-November, will apply to all image, audio, and video content, with room for exemptions.
Google notes that ads using AI for simple image editing techniques and ads that contain AI elements that are “inconsequential” will be exempt from the requirement of labeling.
Deepfakes rattle the FEC
Ahead of the 2024 polls, the U.S. Federal Election Commission (FEC) has confirmed plans to crack down on the dubious use of AI tools for political campaigns. Top of the FEC’s concerns is the reliance on deepfakes, which it says could be used to mislead voters, citing the circulation of AI-generated videos involving Donald Trump and Ron DeSantis.
Building upon the successes of regulating the use of digital currencies to fund campaigns, the FEC is steeling itself for a brawl with yet another emerging technology. With a public consultation underway, the FEC says it will be adopting a measured approach to prevent muzzling free speech.
“The technology will almost certainly create the opportunity for political actors to deploy it to deceive voters, in ways that extend well beyond any First Amendment protections for political expression, opinion or satire,” read one petition to the FEC.
In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: AI truly is not generative, it’s synthetic