Federal Trade Commission website homepage displayed on computer screen

FTC calls for stricter regulations, citing significant AI risks

The U.S. Federal Trade Commission (FTC) has lent its voice to growing calls for stricter artificial intelligence (AI) regulation amid concerns over consumer safety.

The FTC disclosed its stance in an eight-page response to the U.S. Copyright Office’s request for comments on its proposed AI regulatory framework. In the document, the FTC noted that while AI can potentially improve productivity and efficiency, it is fraught with risks that could harm unsuspecting consumers.

For the FTC, unsupervised AI development may lead to a proliferation of cyber fraud, privacy violations, and copyright infringement cases. The regulator noted in its response that these risks could have unintended consequences for the U.S. economy, affecting small businesses and a significant chunk of the workforce.

“The manner in which companies are developing and releasing generative AI tools and other AI products, however, raises concerns about potential harm to consumers, workers, and small businesses,” said the FTC.

“The FTC has been exploring the risks associated with AI use, including violations of consumers’ privacy, automation of discrimination and bias, and turbocharging of deceptive practices, imposter schemes, and other types of scams.”

To protect consumers from unfair practices, the FTC said it would “vigorously use the full range of its authorities,” without exemption, to mitigate AI risks, mirroring its stance on blockchain and digital currencies. Over the last 12 months, the regulator has upped the ante against AI developers, most notably launching an investigation into OpenAI for allegedly breaching consumer protection laws.

The streak of enforcement actions against Big Tech firms rolled on, punctuated by a probe into Amazon and Ring for illegally using data obtained via Alexa to train AI models in violation of privacy rules. The regulator also scored wins against two firms that attempted to fraudulently use machine learning models to offer investment advice to unsuspecting users.

“These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets,” said the FTC. “In addition, AI tools can be used to facilitate collusive behavior that unfairly inflates prices, precisely target price discrimination, or otherwise manipulate outputs.”

Attempts to solve the copyright puzzle

In response to the U.S. Copyright Office, the FTC organized a roundtable with creatives from music, art, film, and software development in attendance, in a bid to resolve the intellectual property (IP) concerns associated with AI models.

The FTC noted a consensus among participants that AI developers had used their copyrighted materials to train machine learning models, and that they shared a collective fear for their livelihoods. In its submission, the FTC suggests robust mechanisms to give creators greater control over their work and increased transparency from AI developers, with roundtable participants seeking appropriate credit and compensation.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: CoinGeek Conversations with Owen Vaughan & Alessio Pagani: Blockchain can bring accountability to AI


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.