The European Union (EU) is a first mover in its attempt to shape the global landscape of AI governance with the recent agreement that the 27-nation bloc reached on the Artificial Intelligence Act (AI Act).
> "Deal! #AIAct" pic.twitter.com/UwNoqmEHt5
> — Thierry Breton (@ThierryBreton) December 8, 2023
The AI Act takes a nuanced "risk-based approach" to AI products and services, focusing on how AI is used rather than on the technology itself. Its core aim is to protect democracy, uphold the rule of law, and safeguard fundamental rights such as freedom of speech, while simultaneously fostering investment and innovation.
The EU’s AI Act and its impact on different AI applications
The Act categorizes AI applications by risk level. Lower-risk applications, like content recommendation systems or spam filters, face minimal obligations, such as disclosing that they are AI-powered. High-risk systems, however, particularly those in sensitive sectors like healthcare, education, and public services, are subject to stringent requirements, including detailed documentation of their processes and human oversight of their operation.
The Act outright bans certain uses of AI that it deems unacceptably risky, such as social scoring systems that could manipulate behavior. It also restricts police use of AI for biometric identification in public spaces, with exceptions for serious crimes such as terrorism or kidnapping.
To enforce these policies, the EU will establish a European AI Office responsible for compliance, implementation, and enforcement. A non-compliant company faces fines of 1.5% to 7% of its global revenue, capped at 35 million euros ($37 million).
Balancing innovation and regulation
The European Union takes pride in being the first continent to set clear rules for using AI, but its AI Act raises questions about the impact on innovation. The EU's "people-first" regulatory philosophy, which emphasizes citizen protection against market risks, contrasts with the more business-centric approach of nations like the United States, where regulation often prioritizes corporate growth, sometimes at the expense of citizens.
Whenever a government regulates a market, it risks stifling innovation through the policies that must be navigated before progress can be made, and the EU may find itself in this position with artificial intelligence. Leaders like French President Emmanuel Macron have already raised concerns that the AI Act could hinder European tech companies relative to their global counterparts.
Regardless, as AI-powered technology continues to proliferate around the globe, the demand for regulatory oversight will continue to increase—especially as there are more instances of AI-powered systems producing negative outputs that harm individuals and businesses.
At the moment, the EU has reached only a tentative agreement on the AI Act; the final wording of the bill is subject to adjustments and approval by EU member states and the Parliament, with a vote expected within the next year. If all goes well, the AI Act is expected to come into effect two years after its final approval.
For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—keeping data safe while guaranteeing its immutability. Check out CoinGeek's coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
Watch: Does AI know what it’s doing?