The UK government has announced the key principles and parameters of future regulation of AI in the UK.  The regulations themselves will be set out in a future white paper (expected late 2022) which will take into account responses to a call for evidence launched today.  In the meantime, the announcement shows that the UK shares some of the goals of the EU but is taking a different approach.  Here, we pick out the key points.

Why does AI need to be regulated?

Both the UK and the EU appear to agree on this at a high level:

  • some form of regulation is required;
  • the UK announcement says that the proposed rules are to address "future risks and opportunities so businesses are clear how they can develop and use AI systems and consumers are confident they are safe and robust";
  • the UK's proposals will "focus on supporting growth and avoiding unnecessary barriers being placed on businesses"; and
  • "if rules around AI in the UK fail to keep up with fast moving technology, innovation could be stifled and it will become harder for regulators to protect the public".

Core principles (UK) v detailed regulations (EU)

Whilst the need for AI regulation is recognised, how that need is addressed will differ depending on the jurisdiction.

The EU's proposed regulation of AI is detailed, setting out specific actions that those subject to the EU regulations should and should not do.

In contrast, the UK regulations will have six "core principles" requiring developers and users to:

  • ensure that AI is used safely;
  • ensure that AI is technically secure and functions as designed;
  • ensure that AI is appropriately transparent and explainable;
  • consider fairness;
  • identify a legal person to be responsible for AI; and
  • clarify routes to seek redress or for contestability.

The announcement explicitly recognises that the UK will take a different approach from the EU, opting instead for a sector-based approach in which individual regulators provide further guidance:

"Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings. This better reflects the growing use of AI in a range of sectors."

"Regulators - such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicine and Healthcare Products Regulatory Agency - will be asked to interpret and implement the principles."

Regulators will be encouraged to consider lighter-touch options, which could include guidance, voluntary measures or creating sandboxes.

How this plays out in practice will be of interest.  In particular, there is a risk of regulatory overlap or conflicting guidance.  The EU AI Act is currently being debated, but proposed amendments already recognise the risk that EU AI regulations overlap with, or are inconsistent with, existing EU legislation for specific sectors.  However, regulators in the UK have experience of co-operating with each other (for example, on algorithmic transparency) and are likely to have the issue of regulatory overlap on their radar.

The UK government believes that this approach "will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to boost productivity and growth".

Given the government's stated ambitions, references to an AI 'rulebook' may give the wrong impression of what will be produced.  We will have to wait until the UK's AI White Paper later in 2022 to see what the proposed regulations look like in practice.  Even then, expect further refinement.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Martin Cook.