
The United States Copyright Office says that artificial intelligence (AI)-generated deepfakes pose a serious threat to Americans and has called on Congress to urgently enact a new law for the sector.

The Office was charged by Congress with conducting research and collecting public views on the best way to oversee the nascent AI sector, especially as it applies to intellectual property (IP). It called for public feedback in August 2023, and after reviewing over 10,000 responses, it has now published the first in a series of reports on AI’s impact on IP.

The first part dives into digital replicas, which it says span AI-generated music, robocall impersonations of public figures, images in pornographic videos, and more.

While such content isn’t new, AI has accelerated its production, and as the technology rapidly improves, it is becoming increasingly difficult to distinguish fake content from real. One report found that people are now more likely to judge AI-generated images as real than actual photographs.

The Copyright Office believes that such rapid developments necessitate robust regulations that will protect public figures, private citizens and businesses.

While it acknowledged that some states are making progress, the Office says existing laws don’t provide sufficient legal avenues to protect victims and punish orchestrators. As such, it’s pushing for a new federal law rather than amendments to existing laws.

This law “should be narrower than, and distinct from, the broader ‘name, image, and likeness’ protections offered by many states.”

The law should protect ordinary citizens and public figures alike, as “everyone is vulnerable to the harms that unauthorized digital replicas can cause.” Generating and distributing such deepfakes should be punishable, and digital platforms should also bear liability for allowing their distribution, the Office recommends.

One of the popular defenses that critics of AI laws have relied on is the First Amendment, which protects Americans’ right to free speech. The Office proposes that any new law on deepfakes should expressly address free speech concerns, advocating for a “balancing framework, rather than categorical exemptions.”

“It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens. We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods,” commented Director Shira Perlmutter.

The Office’s proposals are expected to inform Congress’s direction on AI regulation. Already, nearly a dozen bills have been proposed to deal with specific aspects of AI. The latest is the NO FAKES Act, which seeks to protect creatives from AI impersonation. It was spurred by a recent, widely publicized clash between ChatGPT maker OpenAI and Hollywood actress Scarlett Johansson over the use of her voice.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Understanding the dynamics of blockchain & AI
