The UK Government has published a White Paper setting out how it proposes to regulate Artificial Intelligence (AI).  

This article covers the following key points organisations looking to procure, develop and deploy AI need to know:

  • the objectives of the UK's approach: to drive growth and prosperity; increase public trust; strengthen the UK's position as a global leader;
  • the principles of the UK's approach: pro-innovation; proportionate; trustworthy; adaptable; clear; collaborative;
  • AI defined: there is no single definition of AI; rather, the key characteristics of AI which mean a bespoke regulatory framework is required are adaptivity and autonomy; these characteristics help ensure a common understanding of what is being discussed (a point we made in our glossary of current and anticipated laws and regulations relevant to AI in the UK and EU);
  • a context-specific approach: existing regulators are to apply existing regulations and publish practical guidance; the proposed regulatory framework will cover foundation models, including large language models like OpenAI's GPT-4 and Google's Bard;
  • non-statutory cross-sectoral principles to guide regulator responses: Safety, security and robustness; Appropriate transparency and explainability; Fairness; Accountability and governance; Contestability and redress.
  • new central government functions to support regulators;
  • what next: the paper sets out specific actions and expectations of government and regulators in the short, medium and long term, including an expectation that 'key regulators' will publish guidance on applying the principles between September 2023 and March 2024.  We can also expect the White Paper to be scrutinised as part of the House of Commons Science and Technology Committee's inquiry into the governance of AI.

The proposed regulatory framework applies to the whole of the UK. It does not change the territorial applicability of existing legislation relevant to AI (including, for example, data protection legislation).  It does not seek to address wider societal and global challenges related to the development and use of AI, such as access to data, compute capacity, and balancing the rights of content producers and AI developers (or, for example, proposals for an AI Convention).

Organisations, public and private, looking to procure, develop and deploy AI systems need to be aware of the following.

The UK's approach: objectives and principles

The objectives of the regulatory approach are to:

  • Drive growth and prosperity by making responsible innovation easier and reducing regulatory uncertainty.
  • Increase public trust in AI by addressing risks and protecting our fundamental values.
  • Strengthen the UK’s position as a global leader in AI. The development of AI technologies can address some of the most pressing global challenges, from climate change to future pandemics. There is also growing international recognition that AI requires new regulatory responses to guide responsible innovation.

The 'essential characteristics' of the regulatory regime are:

  • Pro-innovation: enabling rather than stifling responsible innovation.
  • Proportionate: avoiding unnecessary or disproportionate burdens for businesses and regulators.
  • Trustworthy: addressing real risks and fostering public trust in AI in order to promote and encourage its uptake.
  • Adaptable: enabling us to adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.
  • Clear: making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.
  • Collaborative: encouraging government, regulators, and industry to work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.

The above is a statement of the Government's intent and direction, relevant to regulators and industry.  The UK Government is aware of the EU's proposed AI Act, and concerns have been raised about the impact of the EU AI Act on start-ups (for example, in this survey).  The UK has actively chosen to pursue a different approach.  For example, the UK Government's policy paper of July 2022 (the forerunner of the White Paper) said that setting a 'relatively fixed definition' of AI was not, in its view, the right approach for the UK.

AI defined(?)

How does AI differ from other technologies such that a bespoke regulatory framework is warranted?  The two key characteristics of AI systems, arising alone or together, are:

  • adaptivity - AI systems are trained (once or continually) and operate by inferring patterns and connections which often are not easily discernible to humans.  Through such training, AI systems can develop the ability to perform new forms of inference not foreseen by their human programmers.
  • autonomy - some AI systems can make decisions without the express intent or ongoing control of a human.

Either characteristic, alone or in combination, can make it difficult to explain, predict or control an AI system's outputs, and challenging to allocate responsibility for its operation and outputs.

Defining AI by reference to these characteristics, rather than by a fixed definition, aims to future-proof the framework against unanticipated new technologies; the characteristics can be adapted if needed.

Context-specific approach

The regulatory framework is context-specific.  It focuses on the outcomes AI is likely to generate in particular applications; there will not be rules or risk levels for entire sectors or technologies.  Existing regulators will implement the proposed regulatory framework. The justification is that existing regulators understand their sectors, are best placed to conduct AI risk assessments, and can determine how existing regulations should be applied or adapted.

This is in contrast to the EU's approach under the EU AI Act, which, amongst other things, will introduce 'horizontal' regulation cutting across multiple sectors (click here for a one-page flowchart to navigate the EU AI Act).

The effectiveness of the proposed regulatory framework will depend, in part, on the regulators. The White Paper recognises that existing regulators and organisations vary in how much work they have done to adapt existing regulations to AI.  For example, the FCA is due to report on its consultation on AI in financial services and is active in examining responsible AI, and the ICO has published guidance on AI and data protection. However, regulators also have differing levels of capability to understand AI, including the technology itself, its use cases, and its impact on business models.

Cross-sectoral principles

Existing regulators will be expected to implement the framework using five 'values-focussed' cross-sectoral principles.  These build on the OECD's AI principles, although they do not mirror the language exactly.

The principles are intended to be applied by regulators proportionately.  This suggests that regulators should focus on the AI systems and uses that pose the highest risk (similar to how the EU AI Act sets out different obligations and restrictions for AI uses at different levels of risk).

The principles are also intended to complement existing regulation and to be applied in accordance with existing laws and regulations. Regulators, individually or potentially collectively, will produce guidance on how the principles apply and what best practice looks like. Examples of this already exist, such as the NHS AI and Digital Regulations Service offering a simpler 'shop front' for those they regulate. It is possible for the principles to conflict with each other and with other regulation; again, regulators (individually and collectively) will need to consider what is appropriate in the circumstances.

The principles will be issued on a non-statutory basis.  That gives government the flexibility to change them following monitoring and evaluation of their use.  However, the White Paper notes that some regulators have 'expressed concerns that they lack the statutory basis to consider the application of the principles.'  New laws remain a possibility: a potential new duty 'requiring regulators to have due regard to the principles' is mooted.

The following sets out each principle, how it is defined and explained, and the factors that the White Paper expects regulators will want to consider when implementing the principle or providing guidance about it.

Safety, security and robustness

Definition and explanation: AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. Regulators may need to introduce measures for regulated entities to ensure that AI systems are technically secure and function reliably as intended throughout their entire life cycle.

Factors regulators may wish to consider:

  • provide guidance about this principle, including considerations of good cybersecurity and privacy practices;
  • refer to a risk management framework that AI life cycle actors should apply.

Appropriate transparency and explainability

Definition and explanation: AI systems should be appropriately transparent and explainable. Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used). Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system. An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles (for example, to identify accountability). An appropriate degree of transparency and explainability should be proportionate to the risk(s) presented by an AI system.

Factors regulators may wish to consider:

  • set expectations for AI life cycle actors to proactively or retrospectively provide information relating to: the nature and purpose of the AI; the data being used and information relating to training data; the logic and process used; and accountability for the AI and any specific outcomes;
  • set explainability requirements, particularly for high-risk systems.

Fairness

Definition and explanation: AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Actors involved in all stages of the AI life cycle should consider definitions of fairness that are appropriate to a system's use, outcomes and the application of relevant law. Fairness is a concept embedded across many areas of law and regulation, including equality and human rights, data protection, consumer and competition law, public and common law, and rules protecting vulnerable people.

Factors regulators may wish to consider:

  • interpret fairness for their sector and decide when it is important and relevant;
  • design, implement and enforce appropriate governance requirements for fairness;
  • where a decision involving AI has a legal or significant effect on an individual, consider whether the AI system operator needs to provide an appropriate justification.

Accountability and governance

Definition and explanation: Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle. AI life cycle actors should take steps to consider, incorporate and adhere to the principles and introduce measures necessary for the effective implementation of the principles at all stages of the AI life cycle.

Factors regulators may wish to consider:

  • determine who is accountable for compliance with existing regulations and the principles;
  • produce guidance on governance mechanisms.

Contestability and redress

Definition and explanation: Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm.

Factors regulators may wish to consider:

  • provide guidance on where those affected by AI harms should direct a complaint or dispute;
  • clarify how this principle interacts with the requirements of appropriate transparency and explainability, which act as pre-conditions of effective redress and contestability.
For each of the above principles, the White Paper also expects regulators to consider the role of technical standards (except for contestability and redress, where this is not mentioned).

Central government supporting functions

Government intends to put mechanisms in place to coordinate, monitor and adapt the regulatory framework.  What these look like in practice is not yet known; the government intends to publish further information in the next year (see What next below).

  • Monitoring, assessment and feedback - a central monitoring and evaluation framework to assess the new framework's impact; gather relevant data; support regulators to do the same;
  • Support coherent implementation of the principles - maintain central regulatory guidance to support regulators in implementing the principles; identify barriers for regulators (e.g. scope of regulatory remit, regulatory powers and capabilities); identify conflicts or inconsistencies in the way the principles are interpreted by regulators; monitor and assess the principles;
  • Cross-sectoral risk assessment - maintain a society-wide AI risk register; monitor and review known risks and identify new and emerging risks; work with regulators to clarify responsibilities for those risks; support 'join-up' between regulators on cross-cutting AI-related risks; identify inadequately covered risks; share risk enforcement best practices;
  • Support for innovators (including testbeds and sandboxes) - assist innovators in navigating regulatory complexity; identify barriers;
  • Education and awareness - provide guidance to businesses trying to navigate regulations; raise awareness among consumers and the public;
  • Horizon scanning - monitor emerging trends and opportunities; proactively convene industry and stakeholders to establish how regulations can support the AI ecosystem;
  • Ensuring interoperability with international regulatory frameworks - monitor alignment between the UK principles and international approaches.

What next

The White Paper sets out what to expect in the short and medium term:

  • in the first 6 months: engage with stakeholders; publish the government's response to the consultation; issue the cross-sectoral principles to regulators; design and publish an AI Regulation Roadmap;
  • in the 6-12 months after publication: agree partnership agreements to deliver the first central functions; encourage 'key regulators' to publish guidance on how the cross-sectoral principles apply within their remits; publish proposals for a central monitoring and evaluation framework;
  • 12 months+ after publication: deliver a first iteration of all the central functions; encourage remaining regulators to publish guidance; publish a draft central, cross-economy AI risk register for consultation; publish the first monitoring and evaluation report; publish an updated AI Regulation Roadmap.

AI regulations are coming.  We are actively engaged with these developments (for example, we responded to the government's July 2022 policy paper).  If you would like to respond to the consultation or discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.

For a one-page overview of current and proposed regulations relevant to AI in the UK, EU and US (as at January 2023), click here.