
US comprehensive AI regulation on the horizon following new bipartisan framework

As artificial intelligence (AI) reaches new heights, two U.S. lawmakers have unveiled a comprehensive blueprint to regulate the sector, calling for a licensing regime for industry participants.

The bipartisan legislative framework, introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), would require firms developing generative AI models or systems deployed for facial recognition to register with an independent oversight body.

Both lawmakers say the body would exercise regulatory authority over AI developers, conduct regular audits, and monitor the technological and economic impacts of AI adoption.

One key pillar of the proposed regulatory framework is the establishment of civil and criminal liability for AI developers. The lawmakers are urging Congress to pass new laws for cases where existing laws cannot provide relief for victims of AI misuse.

“Congress should ensure that A.I. companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms,” read a joint statement from the senators.

Citing the threat AI poses to national security, the proposed framework urges Congress to impose guardrails on the export of AI models to adversary nations, including Russia and China. It would also limit the export of U.S. machine learning models to countries with records of “gross human rights violations.”

Currently, the U.S. has rolled out a series of export restrictions on the sale of semiconductors and chips to China, with hardware manufacturers Nvidia (NASDAQ: NVDA) and AMD (NASDAQ: AMD) precluded from selling to certain Middle Eastern countries.

The consumer-centric rules provide for users’ right to affirmative notice before interacting with an AI system and to unrestricted access to model information via a public database. Developers would be expected to clearly label all AI-generated content and, in the case of deepfakes, to provide additional technical disclosures.

“Consumers should have control over how their personal data is used in A.I. systems, and strict limits should be imposed on generative A.I. involving kids,” the statement added.

Brewing AI activity in the US

As with every innovative technology, the regulatory scene in the U.S. is brewing with activity as authorities scramble to roll out rules for safe use. The U.S. Copyright Office is currently seeking public comment on the impact of AI on intellectual property, grappling with several key issues, including the copyright status of AI-generated content.

Ahead of the 2024 general election, the Federal Election Commission (FEC) has come up with a plan to regulate the use of deepfakes during the campaign season.

Several AI developers, including OpenAI and Meta (NASDAQ: META), are facing lawsuits in U.S. courts over claims of unjust enrichment, privacy violations, and copyright infringement.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Does AI know what it’s doing?


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.