
US Copyright Office calls for ‘prompt federal action’ to combat AI deepfakes


The United States Copyright Office says that artificial intelligence (AI)-generated deepfakes pose a serious threat to Americans and has called on Congress to urgently formulate a new law for the sector.

The Office was charged by Congress with conducting research and collecting public views on the best way to oversee the nascent AI sector, especially as it applies to intellectual property (IP). It called for public feedback in August 2023, and after reviewing over 10,000 responses, it has now published the first in a series of reports on AI’s impact on IP.

The first part dives into digital replicas, which it says span AI-generated music, robocall impersonations of public figures, images in pornographic videos and more.

While such content isn’t new, AI has accelerated its spread, and as the technology rapidly improves, it’s becoming increasingly difficult to tell fake content from the real thing. One report found that people are now more likely to judge AI-generated images as real than actual photographs.

The Copyright Office believes that such rapid developments necessitate robust regulations that will protect public figures, private citizens and businesses.

While it acknowledged that some states are making progress, the Office says existing laws don’t provide sufficient legal avenues to protect victims and punish orchestrators. As such, it’s pushing for a new federal law rather than amendments to existing laws.

This law “should be narrower than, and distinct from, the broader ‘name, image, and likeness’ protections offered by many states.”

The law should protect ordinary citizens and public figures alike, as “everyone is vulnerable to the harms that unauthorized digital replicas can cause.” Generating and distributing such deepfakes should be punishable, and digital platforms should also bear liability for allowing their distribution, the Office recommends.

One of the popular defenses that critics of AI laws have relied on is the First Amendment, which protects Americans’ right to free speech. The Office proposes that any new law on deepfakes should expressly address free speech concerns, advocating for a “balancing framework, rather than categorical exemptions.”

“It has become clear that the distribution of unauthorized digital replicas poses a serious threat not only in the entertainment and political arenas but also for private citizens. We believe there is an urgent need for effective nationwide protection against the harms that can be caused to reputations and livelihoods,” commented Director Shira Perlmutter.

The Office’s proposals are expected to inform Congress’s direction on AI regulations. Already, nearly a dozen bills have been proposed to deal with specific aspects of AI. The latest is the NO FAKES Act, which seeks to protect creatives from AI impersonation. It was spurred by a recent, widely publicized clash between ChatGPT maker OpenAI and Hollywood actress Scarlett Johansson over the use of her voice.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Understanding the dynamics of blockchain & AI

New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.