
The United States Copyright Office says that artificial intelligence (AI)-generated deepfakes pose a serious threat to Americans and called on Congress to urgently formulate a new law for the sector.

The Office was charged by Congress with conducting research and collecting public views on the best way to oversee the nascent AI sector, especially as it applies to intellectual property (IP). It called for public feedback in August 2023, and after reviewing over 10,000 responses, it has now published the first in a series of reports on AI’s impact on IP.

The first part dives into digital replicas, which it says span everything from AI-generated music and robocall impersonations of public figures to images in pornographic videos and more.

While such content isn’t new, AI has accelerated its spread, and as the technology rapidly improves, it’s becoming increasingly difficult to distinguish fake content from the real thing. One report found that people are now more likely to rate AI-generated images as real than actual photographs.

The Copyright Office believes that such rapid developments necessitate robust regulations that will protect public figures, private citizens and businesses.

While it acknowledged that some states are making progress, the Office says existing laws don’t provide sufficient legal avenues to protect victims and punish orchestrators. As such, it’s pushing for a new federal law rather than amendments to existing laws.

This law “should be narrower than, and distinct from, the broader ‘name, image, and likeness’ protections offered by many states.”

The law should protect ordinary citizens and public figures as “everyone is vulnerable to the harms that unauthorized digital replicas can cause.” Generating and distributing such deepfakes should be punishable, and digital platforms should also bear liability for allowing the distribution, the Office recommends.

One of the popular defenses that critics of AI laws have relied on is the First Amendment, which protects Americans’ right to free speech. The Office proposes that any new law on deepfakes should expressly address free speech concerns, advocating for a “balancing framework, rather than categorical exemptions.”

“It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens. We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods,” commented Director Shira Perlmutter.

The Office’s proposals are expected to inform Congress’s direction on AI regulations. Already, nearly a dozen bills have been introduced to deal with specific aspects of AI. The latest is the NO FAKES Act, which seeks to protect creatives from AI impersonation. It was spurred by a recent publicized clash between ChatGPT maker OpenAI and Hollywood actress Scarlett Johansson over the use of her voice.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Understanding the dynamics of blockchain & AI
