Following its initial review of artificial intelligence Foundation Models (FMs), launched in May this year, the CMA published its report on 18 September.
FMs are machine learning tools designed to produce a wide range of outputs. Traditional artificial intelligence models were trained on task-specific data to perform limited functions; FMs go beyond these models through their machine learning capabilities. They are trained on broad data sets and can be adapted to perform cognitive tasks such as language comprehension, natural conversation, text generation and the creation of audio content. The performance of FMs is designed to improve over time, as they learn from running an algorithm on a continuous data input. Examples of FMs include ChatGPT, BERT and DALL-E 2.
Whilst FMs have great scope to accelerate transformative growth in the economy, they may pose risks to consumers; for example, by challenging data protection, facilitating fraud or spreading inaccurate information.
Accordingly, the report puts forward the following principles, designed to ensure consumer protection and healthy competitive practices:
- Accountability – FM developers and deployers are accountable for outputs provided to consumers.
- Access – ongoing ready access to key inputs, without unnecessary restrictions.
- Diversity – sustained diversity of business models, including both open and closed.
- Choice – sufficient choice for businesses so they can decide how to use FMs.
- Flexibility – the flexibility to switch between and/or use multiple FMs according to need.
- Fair dealing – no anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.
- Transparency – consumers and businesses are given information about the risks and limitations of FM-generated content so they can make informed choices.
Over the coming months, the CMA intends to undertake a programme of engagement with relevant stakeholders, across the UK and internationally, to develop these principles. Updates from the CMA are likely to be published in early 2024.
Sarah Cardell, CEO of the CMA, has stated: ‘There remains a real risk that the use of AI develops in a way that undermines consumer trust or is dominated by a few players who exert market power that prevents the full benefits being felt across the economy. The CMA’s role is to help shape these markets in ways that foster strong competition and effective consumer protection, delivering the best outcomes for people and businesses across the UK… that’s why we have today proposed these new principles and launched a broad programme of engagement to help ensure the development and use of foundation models evolves in a way that promotes competition and protects consumers’.
If you have any questions or would otherwise like to discuss any issues raised in this article, please contact David Varney or another member of our technology team.
This article was written by Victoria McCarron.