UK Watchdog Lays Out Principles For Generative AI Models

The UK’s Competition and Markets Authority (CMA) has released a report based on its review of generative artificial intelligence, which it launched back in May.

The watchdog has set out a list of seven principles intended to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models” as potential threats abound.

Foundation models (FMs) are the technology underpinning generative AI (GAI) models like ChatGPT and other tools that produce text, image and voice outputs from typed human prompts.

These “large, general machine-learning models” are “trained on vast amounts of data and can be adapted to a wide range of tasks and operations,” the CMA wrote in its report. 

The set of principles drafted by the CMA looks at how GAI models will be regulated to ensure fair market competition in the future.

The competition watchdog proposes the following:

Accountability: “FM developers and deployers are accountable for outputs provided to consumers.”

Access: “Ongoing ready access to key inputs, without unnecessary restrictions.”

Diversity: “Sustained diversity of business models, including both open and closed.”

Choice: “Sufficient choice for businesses so they can decide how to use FMs.”

Flexibility: “Having the flexibility to switch and/or use multiple FMs according to need.”

Fair Dealing: “No anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.”

Transparency: “Consumers and businesses are given information about the risks and limitations of FM-generated content so that they can make informed choices.”

Since GAI’s emergence into the mainstream, many people have feared that it could threaten white-collar jobs across industries and mass-produce disinformation.

While GAI tools also promise certain benefits, offering faster ways to complete everyday tasks, CMA chief executive Sarah Cardell points out that people are adopting AI at a “dramatic” rate.

“We can’t take a positive future for granted,” she adds. “There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy.”

The CMA estimates that around 160 foundation models have been released by companies that hold a significant share of the generative AI ecosystem, such as Google, Meta, Microsoft, Amazon, OpenAI and Stability AI.

According to the CMA, factors that could undermine the principles it has set out in its report include mergers or acquisitions that might lessen competition in markets, firms blocking “innovative challengers who develop and use FMs,” undue restrictions on a company’s ability to switch between FM providers, restrictive ecosystems, decisions to bundle products and services, and the release of “false and misleading content” that impacts consumer decision-making.

The CMA said it will publish a report on how these principles have been received, as well as an updated list of proposed principles, in 2024. 
