US lawmaker calls for clear labeling of AI-generated content to curb misinformation

As the world grapples with the grim realities of the misuse of artificial intelligence (AI), U.S. Senator Michael Bennet (D-CO) has called on technology companies to label AI-generated content clearly.

In a letter addressed to the CEOs of Meta (NASDAQ: META), Alphabet (NASDAQ: GOOGL), OpenAI, Microsoft (NASDAQ: MSFT), Twitter, and TikTok, Bennet warned that AI poses significant risks to society amid galloping adoption rates. Bennet stated that the pace of adoption has “outpaced” existing safeguards, urging tech firms to seize the initiative in ensuring safe AI usage.

Bennet’s letter suggested that AI firms should make AI-generated content easily identifiable to deal with the existing threat of misinformation and propaganda. He added that wide-scale industry collaboration is required “to combat the spread of unlabeled AI.”

“Developers should work to watermark video and images at the time of creation, and platforms should commit to attaching labels and disclosures at the time of distribution,” wrote Bennet. “A combined approach is required to deal with this singular threat.”

Currently, several firms have indicated an interest in labeling AI-generated content amid growing concerns. Google said it would append a written disclosure to AI-generated images on its platform, while Microsoft, DALL-E, and Midjourney have all agreed to use watermarks to distinguish AI content.

However, Bennet raised the alarm over the ease with which users may circumvent the watermark rules, citing Stable Diffusion’s open-source nature. He added that labeling should be conspicuous, with particular attention to AI usage on political accounts, and that firms should make the necessary disclosures to regulators about their efforts.

“AI system developers must scrutinize whether their models can be used to manipulate and misinform, and should conduct public risk assessments and create action plans to identify and mitigate these vulnerabilities,” read Bennet’s letter.

Generative AI developers are expected to reply to Bennet’s letter by July 31, detailing the actionable steps they will take following a breach of watermark rules.

AI rules are several months away

While several jurisdictions like the European Union (EU) have moved ahead to create draft regulations for AI, pundits have noted that the rules may not be operational until 2025.

In the absence of these rules, consumer groups have called on governments to take preemptive steps to safeguard users from the dangers of AI. Meanwhile, a group of technologists, including Tesla (NASDAQ: TSLA) CEO Elon Musk, has called for a six-month moratorium on AI development to allow regulations to catch up with innovations.

Experts posit that AI misuse may “derail stock markets and suppress voter turnout” while adversely affecting sectors like Web3, healthcare, and the broader financial system.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.