The UK government has published a White Paper, A pro-innovation approach to AI regulation (see our overview here, and our flowchart for navigating the White Paper here). The government is consulting on the White Paper; organisations such as the Information Commissioner's Office and the Competition and Markets Authority have responded. We have, too.

Our response was informed by our experience advising clients - private and public organisations alike - through uncertain and novel legal and regulatory landscapes, including on the UK's and EU's proposed AI regulations. Issues we have seen before are likely to arise again, while new issues will emerge from the nature of AI systems and the contexts in which they are used.

Given the important issues raised by the development of AI - and its huge potential for application across multiple sectors - we welcome the government's focus on this area and the effort taken to engage with the many, and sometimes conflicting, issues arising from the use of AI.

Here, in summary, are a few points we think the government may want to consider:

  1. Commission an authoritative review of the legislative and regulatory framework to identify gaps, overlaps, and inconsistencies, as well as the framework's existing strengths. There is precedent for this; see the Law Commission's work on emerging technologies (covering multiple issues, such as digital assets and the legal framework for smart contracts), or the work of the UK Jurisdiction Taskforce to produce digital dispute resolution rules. This could be focussed on specific technologies (in the way the CMA is looking at foundation models) or on specific issues or sectors; the possibility of the Law Commission examining the legal framework for automated decision-making by public bodies has already been raised.
  2. Introduce an AI regulatory hub to provide a single, user-friendly source of AI guidance (potentially as part of the AI Standards Hub). The White Paper envisages that regulators will publish guidance on how the cross-sectoral principles apply under existing regulation. Finding and navigating applicable guidance needs to be made simple.
  3. Introduce an AI regulatory clarification route. Technology providers may find that their systems or products are, or may be, subject to multiple regulatory regimes. There should be a straightforward method of seeking clarification from multiple regulators simultaneously about the applicable guidance.
  4. Establish an AI test case procedure in the civil courts to provide urgent and authoritative judicial guidance on legal issues in disputes concerning emerging technologies. There is precedent for this; see the Financial Markets Test Case Scheme used by the FCA to test the responsiveness of business interruption insurance policies to Covid-19 lockdowns.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Martin Cook, or any other member of our Technology team.