The UK government has published a policy statement on how it will approach AI regulation in the UK (summarised here).  Ahead of an anticipated White Paper setting out its specific plans (expected late 2022), the government called for evidence.  We responded to that consultation.

Our response was informed by our experience advising clients - private and public - through uncertain and novel legal and regulatory landscapes.  Issues we have seen before are likely to arise again, whilst new ones will emerge from the nature of AI systems and the contexts in which they are used.

Given the important issues raised by the development of AI - and its huge potential for application across multiple sectors - we welcome the government's focus in this area and the effort taken to engage with the many, and sometimes conflicting, issues arising from the use of AI.

Here are the key points from our response to the consultation.

The UK's approach

In summary, the UK's policy is for a regulatory framework which:

  • is 'pro-innovation', risk-based and context-specific;
  • is light-touch, involving existing regulators and regulatory mechanisms, with minimal anticipated statutory intervention; and
  • is supported by cross-sectoral principles published by the government to assist coherence between regulators.

In our response we said that government should consider:

  • how greater co-ordination between regulators and non-statutory bodies could be achieved, to ensure a coherent approach to regulation, guidance and enforcement regarding AI systems - perhaps through an AI Regulatory Co-operation Forum, akin to the Digital Regulation Co-operation Forum;
  • producing guidance promptly, and reviewing it regularly, as to how the cross-sectoral principles apply in practice;
  • producing guidance as to which 'high-risk' AI systems government intends regulators to address;
  • producing guidance, from government and regulators, as to how different types of liability will be apportioned between the various stakeholders in an AI system, and the extent to which those liabilities may change due to the dynamic nature of an AI system’s lifecycle;
  • potential legislation, such as a legislative framework for automated decision-making in the public sector, which the Law Commission has identified as a potential area for consultation; and
  • potential legal taskforces, akin to the UK Jurisdiction Taskforce, which has reported on the legal status of smart contracts and, separately, of crypto assets.

We recognise that there are problems with trying to define 'artificial intelligence'.  However, some definition of AI will be required to ensure effective monitoring across regulators - so that any comparisons are 'like for like'.

The cross-sectoral principles

The UK's 'early proposals' for cross-sectoral principles to apply to regulators are to:

  • ensure that AI is used safely; 
  • ensure that AI is technically secure and functions as designed;
  • ensure that AI is appropriately transparent and explainable;
  • embed considerations of fairness into AI;
  • define legal persons’ responsibility for AI governance; and 
  • clarify routes to redress or contestability.

The principles will operate on a non-statutory basis for now, whilst the government monitors their implementation.  What the principles look like in practice will be explained further in the forthcoming White Paper, although the policy statement gives an indication of what they mean.

In our response we said that cross-sectoral principles are necessary to assist with regulatory co-ordination and to help AI stakeholders identify key issues, and that government should consider:

  • guidance as to whether, and to what extent, each context in which an AI system operates is analogous to another context or regulatory regime - or whether each regulatory intervention will be confined to the specific facts of that context and AI system, providing limited guidance for other AI systems and contexts;
  • the specific wording used for the principles, their explanations, their consistency with international standards, and whether they (reasonably) reflect industry practice.  For example, is it appropriate to say 'ensure that AI is technically secure and functions as designed', or should it be 'ensure that AI functions appropriately throughout its lifecycle'?
  • potential additional cross-sectoral principles, such as a need for 'resilience' or 'human oversight'.

What next?

The Office for AI is now considering responses and is expected to publish a White Paper setting out how the UK will regulate AI in late 2022.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Martin Cook.