
In a vote that took place on Wednesday, the European Union’s parliament approved the EU’s AI Act with 523 votes in favor, 46 against, and 49 abstentions.

The Act, on which negotiators reached agreement last December, introduces a regulatory framework that categorizes AI applications based on their perceived risk levels. Lower-risk applications, such as spam filters and content recommendation systems, are subject to minimal regulations; these applications are required to disclose their use of AI to ensure transparency. High-risk AI systems, especially those deployed in sensitive sectors like healthcare, education, and public services, face stringent regulatory requirements; these systems must include detailed documentation of their processes and must have a mandatory human oversight component to their operations.
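The tiered structure described above can be pictured as a simple lookup from a use case to its tier and obligations. This is a minimal illustrative sketch, not legal text: the tier names and example use cases are paraphrased from this article, and the `obligations_for` helper is hypothetical.

```python
# Hypothetical sketch of the AI Act's risk-based tiers.
# Examples and obligations are paraphrased from the article, not the regulation itself.
RISK_TIERS = {
    "minimal": {
        "examples": ["spam filter", "recommendation system"],
        "obligations": ["disclose AI use"],
    },
    "high": {
        "examples": ["healthcare system", "education system", "public services system"],
        "obligations": ["detailed process documentation", "mandatory human oversight"],
    },
    "prohibited": {
        "examples": ["social scoring", "predictive policing",
                     "emotion recognition in schools/workplaces"],
        "obligations": ["banned outright"],
    },
}

def obligations_for(use_case: str) -> list:
    """Return the obligations attached to a use case's risk tier."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["obligations"]
    return ["unclassified: assess case by case"]
```

For example, `obligations_for("social scoring")` returns `["banned outright"]`, reflecting the Act's outright prohibitions, while a spam filter only triggers a transparency disclosure.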

The Act also imposes outright bans on certain AI applications, such as social scoring systems, predictive policing, and emotion recognition systems in schools and workplaces; each of these applications is prohibited due to their potential to infringe on individual freedoms and rights. Additionally, the Act restricts the use of AI for biometric identification
by police in public spaces, except in cases involving serious crimes such as terrorism or kidnapping.

The AI Act’s implementation is scheduled to begin in 2025, following final approval from EU member states—a process that is expected to result in the Act being officially passed as it has already received the legislative body’s endorsement.

“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” tweeted Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law.

“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” he added.

This raises the question: how does the EU bloc plan to turn this new regulation into “reality on the ground?”

Challenges in enforcing the EU’s AI Act

Unless a company blatantly or publicly breaks the rules outlined in the AI Act, regulators may find it difficult to identify, and subsequently penalize, non-compliant companies.

A lot of proprietary technology is developed behind closed doors or “in stealth.” This secrecy can further complicate regulatory oversight, making it challenging for authorities to discern whether a company is deploying AI systems that violate the new rules.

In addition, as with any new law or regulation that might be considered prohibitive to any industry, the AI Act might cause some businesses to relocate their operations outside the EU to avoid compliance costs and potential legal trouble. Others might reconsider whether to make their products or services available in the EU market.

However, if a company is found to be non-compliant, it faces fines ranging from 1.5% to 7% of its global annual turnover, or a flat amount of up to 35 million euros ($37 million), whichever is higher.
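The Act’s top penalty tier pairs a flat amount with a percentage of global turnover, with the higher of the two applying. The arithmetic can be sketched as follows; the function name and parameters are hypothetical, and the tiered structure of the actual regulation (different rates for different violation types) is simplified to a single tier.

```python
def ai_act_penalty(global_turnover_eur: float, rate: float, flat_amount_eur: float) -> float:
    """Simplified sketch of an AI Act fine: the greater of a flat amount
    and a percentage of global annual turnover."""
    return max(flat_amount_eur, rate * global_turnover_eur)

# Top tier (prohibited practices): 7% of turnover or 35 million euros.
# A firm with 1 billion euros in turnover pays on the percentage...
print(ai_act_penalty(1_000_000_000, 0.07, 35_000_000))  # 70 million euros
# ...while a smaller firm with 100 million euros in turnover hits the flat floor.
print(ai_act_penalty(100_000_000, 0.07, 35_000_000))    # 35 million euros
```

The "whichever is higher" design means the flat amount acts as a floor for smaller firms, while the percentage scales the penalty for large multinationals.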

A global standard for artificial intelligence regulation?

The EU is the first bloc of countries to pass a comprehensive set of regulations aimed at the artificial intelligence space. The bloc hopes that the AI Act will pave the way and serve as a reference for other countries actively creating laws around artificial intelligence.

Whether other countries will take inspiration from the EU remains to be seen. The policies that come out of the EU are known to put residents and citizens first, protecting those individuals at the expense of the businesses operating in the targeted sector. This differs sharply from the typical approach in the United States, where new policy tends to weigh businesses, commercial opportunities, and innovation before considering whether U.S. citizens and residents have adequate protection.

However, it is well known that other countries have their eye on AI and are looking to create policies that reduce the harm AI systems can do to society. In America, we saw the Biden administration release an Executive Order on AI that mandated the creation of several programs, organizations, and committees to increase the likelihood that AI systems were being responsibly created, operated, and distributed while mitigating the negative effects that AI could have on the world.

In addition, because 2024 is a big election year for many countries around the world, AI oversight and the creation of AI guidance will only become more significant and occupy a larger share of the conversations taking place about artificial intelligence.

Earlier this year, we saw individuals use generative AI to run disinformation and misinformation campaigns about candidates in upcoming elections. On the opposite end of the equation, companies that create generative AI systems, like Meta (NASDAQ: META), have tried to be proactive by putting safeguards in place that allow audiences and organizations to quickly identify whether the content they are viewing is legitimate, or whether it has been AI-generated or AI-manipulated in ways that violate the platform’s policies.

Both the use of AI for illicit activities and the policy and regulation around artificial intelligence will only increase as AI systems worldwide continue to evolve and become more capable.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: What does blockchain and AI have in common? It’s data
