
EU considers tightening rules for large AI developers amid opposition to strict approach

The European Union is reportedly negotiating over the finer details of a plan to impose stricter rules on artificial intelligence developers ahead of the launch of its widely anticipated legal framework.

The new provisions will focus on large AI firms and their products while ensuring that “new startups aren’t overly burdened by regulation.” Although authorities have not officially confirmed the plans, individuals familiar with the matter told Bloomberg that the provisions will address the rising risks posed by large AI firms.

OpenAI and Google (NASDAQ: GOOGL) are expected to be the entities most affected by the incoming rules. Bloomberg states the plans are “not yet laid out in a written draft and subject to change.” If adopted, the two tech giants would be held to a higher operational standard with increased regulatory scrutiny.

While still tentative, the rules are expected to address concerns about privacy, intellectual property, and equality associated with AI use. To promote the safe use of AI systems, copyright holders may expect prominent players in the space to seek their express approval before using their data to train AI models.

The rules could force the most prominent AI companies to share details of their internal frameworks to prevent misuse and to demonstrate compliance with existing EU data and privacy rules.

One key area of interest for the EU negotiators is the need to prevent anticompetitive practices in the AI space, allowing new companies to offer their services to consumers. French antitrust authorities have already raided NVIDIA’s (NASDAQ: NVDA) offices in Paris as part of investigations revolving around perceived anticompetitive moves in the AI and semiconductor space.

The U.K.’s Competition and Markets Authority has warned of the risks associated with a few companies dominating the markets, “exerting undue influence” at the “expense of innovation.”

The proposed rules are expected to be added to the EU AI Act draft that will likely come into force in 2024. In its current version, the regulations provide for the clear labeling of AI-generated content, a ban on deploying AI systems for predictive policing, and rules against discrimination.

Draft regulation triggers uproar

For consumer organizations, the AI rulebook offers sufficient protection for users; for AI developers, however, the framework threatens the future of the technology in the EU. Days after the draft’s release, several technology executives banded together to voice their displeasure with the proposal, warning of an exodus of leading firms from the region.

Open-source AI was not excluded from the “harsh rules,” with EU negotiators pushing for firms in the field to meet the same requirements. Despite the bloc’s tough stance on AI and other emerging technologies, European Commission President Ursula von der Leyen stated that the final version will push for balanced legislation reflecting three pillars: guardrails, governance, and guiding innovation.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain can bring accountability to AI


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.