The U.K.’s opposition party is amplifying its calls for stricter regulation of artificial intelligence (AI) platforms in the country, similar to the rules that govern the medical, nuclear, and pharmaceutical industries.
Labour Party MP Lucy Powell told The Guardian that large language model (LLM) AI platforms should be licensed before operating in the country to mitigate risks. Powell added that an “interventionist government approach” is required to police the fast-rising industry.
“That is the kind of model we should be thinking about, where you have to have a license in order to build these models,” said Powell. “These seem to me to be good examples of how this can be done.”
By mandating licensing, Powell argues, AI developers would be more transparent about their use of data, which would assist regulators in governing the industry.
Powell’s comments follow the government’s decision to closely monitor developments in the AI landscape. A white paper hinted at a plan to leverage AI to improve the country’s economy by increasing GDP by up to 7% but failed to share a blueprint for industry regulation.
“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed, or how they are controlled,” said Powell.
Growing concerns about privacy, copyright infringement, bias, and data leaks have forced the U.K. to rethink its strategy for AI regulation. Prime Minister Rishi Sunak’s government has floated a new AI task force comprising industry stakeholders and academics to ensure the safe use of the technology.
If service providers are eventually required to hold licenses, the new AI task force has been tipped to become the main regulatory player in the industry. The Prime Minister is also expected to meet with U.S. policymakers to ensure that the U.K. plays a leading role in crafting a global set of standards for the AI industry.
Nothing but AI-powered chaos
Despite rising adoption metrics for AI platforms, authorities are grappling with the grim reality of their misuse. Several AI-themed digital currency scams have taken millions of dollars from victims, while corporate entities are dealing with data breaches occasioned by generative AI platforms.
In Japan, legislators have predicted a spike in copyright infringement cases against AI platforms, given the unauthorized use of data in training AI models. There is also the brewing fear of AI being used for large-scale cyberattacks, and of a “rebellion” by AI systems in the coming years.