
EU to carry out risk assessments on AI and advanced semiconductors

The European Union (EU) may roll out export restrictions on artificial intelligence (AI) products and advanced semiconductors if it deems the technologies pose grave risks to the region’s security.

In a press release, the European Commission announced the launch of risk assessments of critical technologies by member states. The affected technologies are AI, advanced semiconductors, biotechnologies, and quantum technologies, with member nations given until the end of the year to submit their reports.

The Commission will assess the risks of the four emerging technologies on several grounds, including the transformative nature of the technology, its potential for human rights violations, and the “risk of civil and military fusion.”

The Commission says the assessment will be “objective” in nature, with the results forming the foundation for future trade restrictions or new measures to enhance the region’s competitiveness. Member states are expected to consult the private sector, academia, and the general public in completing the assessment.

“We need to continuously monitor our critical technologies, assess our risk exposure and – as and when necessary – take measures to preserve our strategic interests and our security,” said Thierry Breton, Commissioner for the Internal Market.

Analysts have pointed to AI as the technology with the most potential for disruption, pushing the EU to draft AI legislation to keep pace with innovation in the ecosystem. Since the proposed rules were unveiled in June, the EU has faced criticism from AI developers over the stifling nature of the regulatory framework.

One month after the legislation was made public, a group of technology executives penned a strongly worded letter to the EU, warning that the rules could force leading AI developers out of the region. GitHub led other firms in protesting the rulebook’s harsh stance on open-source AI, urging lawmakers to amend the provisions.

Unruffled by the protests from tech executives, the EU continues to see its incoming AI rulebook as the guiding light for other jurisdictions to follow in regulating AI, pointing to its legal framework for digital currencies as an example.

“I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance, and guiding innovation,” said European Commission President Ursula von der Leyen.

A brewing AI cold war

A cold war centered on AI has been brewing between the U.S. and China, with U.S. authorities raising national security alarms. The U.S. government fired the first salvo by restricting semiconductor manufacturers from exporting advanced AI chips to mainland China.

China took its pound of flesh by announcing export restrictions on gallium and germanium, raw materials necessary for producing semiconductors. Amid the U.S.-China row, a frenetic AI arms race is underway, with Saudi Arabia, the United Arab Emirates (UAE), and the United Kingdom reportedly placing orders for AI hardware to build homegrown generative AI products.

For artificial intelligence (AI) to work rightly within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch Combat IQ’s Tim Malik: Harnessing the powers of AI and blockchain


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.