The Competition and Markets Authority (“CMA”) has published a response to the recent UK Government White Paper “A pro-innovation approach to AI regulation”.

The CMA supports the government’s approach of leveraging and building on existing regulatory regimes whilst also establishing a central coordination function for monitoring and support. It highlights that AI is creating many opportunities for businesses to deliver more useful, accessible and personalised online services, but it also sees potential risks that AI poses within its remit. These include incumbent firms self-preferencing at the expense of new innovators, consumers being given false or misleading information, and insufficient transparency for consumers and businesses.

Here we highlight the key points from the CMA's response.

The CMA’s views on the White Paper

The CMA sets out four key messages. It:

  • supports government’s approach of initially placing the cross-sector principles on a non-statutory footing;
  • has started to consider how the proposed principles might apply to its current and future remit;
  • recognises the need for the central co-ordination functions to support the implementation, monitoring and development of the framework and promote coherence across regulators;
  • supports cross-regulatory coordination and coherence through the Digital Regulation Cooperation Forum (DRCF) and other initiatives.

Statutory duties and AI Principles

The CMA agrees with government on the importance of monitoring the effectiveness of the non-statutory approach before moving to a statutory one.

The CMA also agrees that introducing a new duty for regulators to have due regard to the principles could increase the effectiveness of the principles’ use in AI regulation. However, the CMA notes that it cannot enforce the principles directly; it can act only where they intersect with its existing duties and responsibilities.

Initial thinking on how the CMA will apply the White Paper framework

The CMA has started to consider how best it might provide guidance on how it interprets the principles in relation to its remit. The CMA agrees that joint guidance with other regulators may be appropriate and recognises that any guidance must be consistent with existing guidance and create further clarity, not confusion, for firms.

The CMA has set out detailed comments on the White Paper principles to help produce consistency in approach. In summary:

  • Safety, security, robustness – the CMA understands that harms to competition tend to be long-term, structural and indirect economic effects. Additionally, consumers can suffer when their rights under consumer protection law are infringed. The CMA looks at safety through these lenses, whereas other regulators will have their own, different lenses.
  • Appropriate transparency and explainability – this is aligned with the CMA’s competition and consumer protection objectives. Existing remedies available to the CMA may support and align with the White Paper’s principle of appropriate transparency. There are limits to transparency, though, such as the need to protect confidential information or intellectual property rights and the need to mitigate the risk of gaming, manipulation, or facilitation of collusion.
  • Fairness – there is overlap between this principle and the CMA’s remit. The CMA has a role to tackle the risks of bias produced by AI systems but recognises that it is ‘not best placed’ to tackle all of these biases. The CMA believes that defining fairness must take account of the context around an AI system, such as data collection and testing and evaluation practices, as well as the AI system itself; defining fairness will therefore be context-specific.
  • Accountability and governance – the CMA already holds legal persons responsible for the effects of AI systems that they deploy in relation to the CMA’s remit, and is considering how it can do so under the Digital Markets, Competition and Consumers Bill. However, the CMA understands that there may be some novel challenges regarding accountability for certain AI systems, such as (hypothetically) various algorithms learning to reach collusive outcomes without any explicit coordination, information sharing, or intention by human operators. The CMA welcomes further discussion with the government and other regulators on such novel situations.
  • Contestability and redress – the CMA states that the opacity of algorithmic systems and lack of operational transparency make it hard for consumers and customers to effectively discipline firms. Clear routes to redress and/or challenge have a deterrent effect. Against this backdrop, it is essential that regulators are adequately equipped with the resources and expertise to monitor potential harms in their remits and the powers to act where necessary. The CMA goes on to explain that it is building up its capability in this regard.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Shachi Nathdwarawala, Tom Whittaker or Brian Wong, or another member of Burges Salmon’s Technology or Competition teams.