The UK government has published a policy statement setting out how it will approach AI governance and regulation. It gives a clear direction of travel ahead of the fuller implementation detail expected in the White Paper in late 2022.

This article picks out the key points, which in summary are:

  • the UK government explicitly takes a different approach to the EU but seeks to build international co-operation around shaping AI regulation.  There is, of course, a risk of multiple, varying regulatory approaches to AI which share underlying principles but differ in practice.
  • the UK is to regulate the use of AI, not the technology itself.  The government will provide cross-sectoral principles to address underlying issues and risks which require a coherent response.  The principles will not be statutory, so that the UK's approach can adapt as the technology, and the ways AI systems are used, evolve.
  • regulators are to be responsible for designing and implementing proportionate regulatory responses to high-risk uses (not hypothetical or low risk uses). 
  • regulators are to consider a light-touch approach, including encouragement for: the use of guidance and voluntary agreements; working with existing processes; and regulatory co-ordination.
  • regulators may need their powers and remits updated.
  • there is no legislative definition of AI but regulators may set out specific definitions.
  • legislation is to be limited to what is needed to ensure regulators can do their jobs, or to cases where it is the only viable option to address a high-impact risk.

The UK's regulatory approach for its "thriving AI ecosystem"

The UK has a "thriving AI ecosystem". In 2021, the UK was first in Europe and third in the world for private investment in AI companies ($4.65 billion) and newly funded AI companies (49). The UK was also first in Europe for the number of AI publications in 2021, topped globally only by China, the USA and India.

The government is approaching regulation by balancing the various needs of the UK's AI ecosystem.  In its own words:

Rt Hon Nadine Dorries MP, Secretary of State for Digital, Culture, Media and Sport
"AI is unlocking enormous opportunities... I want the UK to be the best place in the world to found and grow an AI business and to strengthen the UK’s position so we translate AI’s tremendous potential into growth and societal benefits across the UK. 

Our regulatory approach will be a key tool in reaching this ambition. A regulatory framework that is proportionate, light-touch and forward-looking is essential to keep pace with the speed of developments in these technologies. Such an approach will drive innovation by offering businesses the clarity and confidence they need to grow while making sure we boost public trust."

Rt Hon Kwasi Kwarteng MP, Secretary of State for Business, Energy and Industrial Strategy
"Our ambition is to support responsible innovation in AI - unleashing the full potential of new technologies, while keeping people safe and secure... We want this framework to be adaptable to AI’s vast range of uses across different industries, and support our world-class regulators in addressing new challenges in a way that catalyses innovation and growth."

What are the drivers of UK regulation?


The policy paper states that any regulation of AI in the UK should be:

  • Context-specific.  We propose to regulate AI based on its use and the impact it has on individuals, groups and businesses within a particular context, and to delegate responsibility for designing and implementing proportionate regulatory responses to regulators. This will ensure that our approach is targeted and supports innovation.
  • Pro-innovation and risk-based.  We propose to focus on addressing issues where there is clear evidence of real risk or missed opportunities. We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI. We want to encourage innovation and avoid placing unnecessary barriers in its way.
  • Coherent.  We propose to establish a set of cross-sectoral principles tailored to the distinct characteristics of AI, with regulators asked to interpret, prioritise and implement these principles within their sectors and domains. In order to achieve coherence and support innovation by making the framework as easy as possible to navigate, we will look for ways to support and encourage regulatory coordination - for example, by working closely with the Digital Regulation Cooperation Forum (DRCF) and other regulators and stakeholders.
  • Proportionate and adaptable.  We propose to set out the cross-sectoral principles on a non-statutory basis in the first instance so our approach remains adaptable - although we will keep this under review. We will ask that regulators consider lighter touch options, such as guidance or voluntary measures, in the first instance. As far as possible, we will also seek to work with existing processes rather than create new ones.

The UK also has an international objective.  The UK will support co-operation on key issues internationally, including through the Council of Europe, OECD working groups and the Global Partnership on AI and through global standards bodies such as ISO and IEC.  

However, whilst the UK shares objectives with the EU when it comes to AI regulation, the UK is consciously taking a different direction (in summary):

The EU has grounded its approach in the product safety regulation of the Single Market (in order to harmonise rules across member states) and, as such, has set out a "relatively fixed definition" of AI in its legislative proposals. The UK government does not consider this the right approach: it does not think the EU proposals as they stand "capture the full application of AI and its regulatory implications" (which we read to mean a lack of application to different sectoral or regulatory contexts), so this "lack of granularity could hinder innovation".

Key challenges

The policy statement says that UK AI regulation must address the risks of:

  • Lack of clarity - ambiguity in how legal frameworks and regulatory bodies apply to AI, since neither was developed with AI and its applications in mind.
  • Overlaps between regulators - risk of contradictory or confusing layers of regulation.
  • Inconsistency - risk of differences in regulators' powers to address the use of AI within their remits, and in the extent to which they have started to do so.
  • Gaps in approach - specific risks include: "around the need for improved transparency and explainability in relation to decisions made by AI, incentivising developers to prioritise safety and robustness of AI systems, and clarifying actors’ responsibilities".

Core characteristics of AI

There are two characteristics of AI as a technology which "demand a bespoke regulatory response" and inform the scope of regulation:

  • adaptiveness of the technology - explaining intent or logic.  AI systems often operate on the basis of instructions which have not been expressly programmed but have instead been 'learnt' through a variety of techniques.  AI systems are often trained on data and execute according to patterns which are not easily discernible to humans.
  • autonomy of the technology - assigning responsibility for action.  AI systems often operate with a high degree of autonomy.  This means that decisions can be made without express intent or the ongoing control of a human.

Cross-sectoral principles

With the above in mind, the current "early proposals" are to:

  • ensure that AI is used safely 
  • ensure that AI is technically secure and functions as designed
  • ensure that AI is appropriately transparent and explainable
  • embed considerations of fairness into AI
  • define legal persons’ responsibility for AI governance 
  • clarify routes to redress or contestability

What these look like in practice will be explained further in the scheduled White Paper, although the policy statement gives an indication of what these mean - such as:

  • Transparency, explainability and fairness
    • It's (largely) up to the regulators
      • Regulators will be tasked with deciding what ‘fairness’ or ‘transparency’ means for AI development or use in the context of their sector. 
      • Regulators will then decide if, when and how their regulated entities will need to implement measures to demonstrate that these principles have been considered or complied with depending on the relevant context.
    • It will differ between regulated sectors
      • "in some settings the public, consumers and businesses may expect and benefit from transparency requirements that improve understanding of AI decision-making".
    • There will likely be transparency statements 
      • "Taking into account considerations of the need to protect confidential information and intellectual property rights, example transparency requirements could include requirements to proactively or retrospectively provide information relating to: (a) the nature and purpose of the AI in question including information relating to any specific outcome, (b) the data being used and information relating to training data, (c) the logic and process used and where relevant information to support explainability of decision making and outcomes, (d) accountability for the AI and any specific outcomes."
    • Potential for some high-risk AI uses to be banned
      • "In some high risk circumstances, regulators may deem that decisions which cannot be explained should be prohibited entirely - for instance in a tribunal where you have a right to challenge the logic of an accusation."
  • Responsibility for AI governance
    • An AI Officer?
      • "Accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person - whether corporate or natural."  
  • Routes to redress and contestability 
    • Who should be able to contest an AI's decision?
      • "Subject to considerations of context and proportionality, the use of AI should not remove an affected individual or group’s ability to contest an outcome. We would therefore expect regulators to implement proportionate measures to ensure the contestability of the outcome of the use of AI in relevant regulated situations."  

The UK government is examining how it can "offer a strong steer to regulators to adopt a proportionate and risk-based approach (for example through government-issued guidance to regulators)".

The proposed cross-sectoral principles build on the OECD's Principles on AI and "demonstrate the UK's commitment to them".  They are "deliberately 'values' focussed", aligning "AI-driven growth and innovation" with the UK's "broader values". 

Significantly, AI regulatory proposals and future regulation "are not... intended to create an extensive new framework of rights for individuals."

Next steps

The White Paper will consider and address:

  • gaps in, and requirements for, regulatory co-ordination
  • the roles, powers, remits and capabilities of regulators
  • the role of technical standards and assurance mechanisms
  • the design of appropriate risk management frameworks, both overall and at individual regulator level

The policy statement invites views from industry, academia and civil society to inform that White Paper.


We are at an early stage in the development of AI regulation in the UK. Some regulators have already made progress in identifying how they might respond to the opportunities and risks of AI.  The UK government has now signalled that more detail will come, both from the government and regulators. 

While we will not know what those regulations look like in practice for a while (and we should not over-rely on the early proposals in the policy statement) we do now know that there will be differing approaches taken internationally, and potentially even within the UK.  Those designing, deploying and using AI will need to carefully navigate these different regulatory regimes.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Martin Cook.