
UK competition regulator voices generative AI concerns

The chief of the Competition and Markets Authority (CMA) has outlined growing concerns regarding artificial intelligence (AI) foundation models, identifying three key interlinked risks to fair, effective, and open competition, including that a concentration of power among just six big tech companies “could lead to winner takes all dynamics.”

Last week, the United Kingdom Competition and Markets Authority (CMA) raised concerns about risks to fair, effective, and open competition in the AI sector arising from the concentration of power among six major technology companies.

Sarah Cardell, CEO of the CMA, said foundation models—a form of generative AI, such as OpenAI’s ChatGPT—represented a potential “paradigm shift” for society.

The CMA is the principal competition regulator in the U.K. It is a non-ministerial government department responsible for strengthening business competition and preventing and reducing anti-competitive activities.

Speaking at the American Bar Association (ABA) Chair’s Showcase on ‘AI Foundation Models’ in Washington, Cardell shared highlights from the CMA’s update to its initial report on AI foundation models (FMs); the update was published the same day as her speech, while the initial report appeared in September 2023.

“When we started this work, we were curious. Now, with a deeper understanding and having watched developments very closely, we have real concerns,” Cardell said.

“The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences. We’re committed to applying the principles we have developed and to using all legal powers at our disposal, now and in the future, to ensure that this transformational and structurally critical technology delivers on its promise.”

One of the principal concerns voiced by the CMA chief was that a concentration of power in the hands of a few companies would give them “the ability and incentives to shape these markets in their own interests.”

The CMA identified six tech companies at the heart of the AI sector, namely Google (NASDAQ: GOOGL), Microsoft (NASDAQ: MSFT), Meta (NASDAQ: META), Amazon (NASDAQ: AMZN), Apple (NASDAQ: AAPL), and Nvidia (NASDAQ: NVDA), whose “interconnected web” of more than 90 partnership and investment links could limit diversity and choice in the market.

Specifically, Cardell highlighted three key interlinked risks that these companies might pose to fair, open, and effective competition:

  • firms controlling critical inputs for developing AI models may restrict access to shield themselves from competition; 
  • powerful incumbents could exploit their positions in consumer or business-facing markets to distort choice and restrict competition in deployment;
  • partnerships involving key players could exacerbate existing positions of market power through the value chain.

In her speech, Cardell said the “winner takes all” dynamics of digital markets had led to the dominance of a few powerful platforms, but that she was “determined to apply the lessons of history” to prevent the same thing from happening again in the AI space.

To achieve this, the CMA proposed a set of “underlying principles to help sustain vibrant innovation and to guide the markets toward positive outcomes”:

  • ongoing ready access to key inputs;
  • sustained diversity of models and model types;
  • sufficient choice for businesses and consumers in how they use foundation models;
  • fair dealing, i.e., no anti-competitive bundling, tying, or self-preferencing;
  • transparency, so that consumers and businesses have the right information about the risks and limitations of models;
  • accountability of developers and deployers for the outputs their models produce.

The CMA launched its initial review of AI foundation models in May 2023, publishing its analysis in a report last September. The report identified a risk that the markets could develop in ways that would cause concern from a competition and consumer protection standpoint, and so proposed the set of principles “to help sustain innovation and guide these markets toward positive outcomes for businesses, consumers, and the wider economy.”

Despite its concerns, the CMA report also recognized “a multitude of benefits these models might bring” if its principles are adopted and the risks mitigated.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain & AI unlock possibilities


New to blockchain? Check out CoinGeek’s Blockchain for Beginners section, the ultimate resource guide to learn more about blockchain technology.