Barely two months after its presentation, the proposed European Union (EU) AI Act has received a thumbs up from the European Parliament amid growing concerns surrounding the technology.
The region’s legislative body adopted its negotiating position after 499 members voted in support of the new AI legal framework. Only 28 members voted against the bill, with 93 abstentions, setting the stage for member countries to decide the final shape of the framework.
The new framework takes a preemptive approach toward mitigating the risks associated with AI use in the region. The bill notes that the Parliament seeks to align AI with existing values of privacy, non-discrimination, transparency, and safety.
“The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law,” said Parliament member Dragoș Tudorache.
Right off the bat, the legal framework proposes a ban on predictive policing systems relying on past criminal behavior and on emotion recognition systems in security and educational institutions. AI platforms for real-time remote biometric identification in public spaces are prohibited but may be permitted for the prosecution of serious crimes, subject to judicial approval.
The law classifies AI platforms according to risk, designating as high-risk those systems capable of influencing voters’ behavior during elections, as well as systems capable of affecting individual health, safety, or the environment.
Generative AI platforms are also expected to clearly label all AI-generated content and provide information used in the training of their models. As an added layer of safety, the new EU rules require all AI systems to be tested in regulatory sandboxes before their full-scale launch.
A flurry of AI activity in Europe
Europe has become the main hub for AI regulation, with several member countries seizing the initiative to police the burgeoning sector. Italian regulators temporarily banned OpenAI’s ChatGPT, while the U.K. has secured priority access to AI models from major players.
The valiant attempts at tight control have drawn criticism, with OpenAI CEO Sam Altman warning of the dangers of overregulation. The proposed framework follows the final approval of the EU’s Markets in Crypto-Assets (MiCA) law to regulate the digital currency industry in the region.
Watch: Regulation of Digital Assets & Digital Asset Businesses