On 22 November 2023, Lord Holmes, a member of the House of Lords, introduced the Artificial Intelligence (Regulation) Bill ("the AI Bill") as a Private Member's Bill. If enacted, the Bill would create a new UK Artificial Intelligence ("AI") regulator and mandate the introduction of further AI regulation.

We pick out the key parts of the AI Bill here. Notably, there are similarities and differences with the UK's White Paper on its AI regulation framework, for which the government response and further updates from the Office for AI are awaited. For those wanting a refresher on the UK's position in its White Paper on AI regulation (March 2023), see our flowchart here and overview here. We identify notable parallels with other AI regulatory updates in italics.

What is AI?

The AI Bill defines Artificial Intelligence and AI to mean:

technology enabling the programming or training of a device or software to—

  • Perceive environments through the use of data;
  • Interpret data using automated processing designed to approximate cognitive abilities; and
  • Make recommendations, predictions or decisions;

with a view to achieving a specific objective.

AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained.

This is different to the position in the White Paper, which: i) intentionally does not define AI, as its focus is on regulating AI use cases rather than the technology itself; and ii) identifies autonomy and adaptability as the key reasons why AI may require a specific regulatory approach (factors which appear absent from the AI Bill's definition of AI).

Further, this is different from the recently updated definition of AI by the OECD (Organisation for Economic Co-operation and Development), which is set to be incorporated into the EU's AI Act:

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."

This raises the question of whether the AI Bill's definition will change to align with other AI regulations or will intentionally remain different.

Purpose - AI requires regulation

The purpose of the AI Bill is to “Make provision for the regulation of artificial intelligence; and for connected purposes.” As currently drafted, the AI Bill is a five-page framework which calls on the Secretary of State to produce more detailed regulations. To summarise:

1. The AI Authority

The Secretary of State must by regulations make provision to create a body called the AI Authority, with functions including:

  • Ensuring that relevant regulators take account of AI and ensuring alignment of approach across relevant regulators in respect of AI. 

    Comment: What is meant by ‘take account’ is not defined.  This may be seen as the ‘flip side of the coin’ to the White Paper's position that the authors ‘anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles’; the AI Bill envisages the AI Authority ensuring that the statutory duty (if enacted) is enforced.
     
  • Undertaking a gap analysis of regulatory responsibilities in respect of AI.
  • Co-ordinating a review of ‘relevant’ legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI.

    Comment: The White Paper notes that there are potential gaps, overlaps and inconsistencies in regulation and legislation in the UK. However, the White Paper is silent on how those issues will be identified or plugged. The AI Bill appears to recognise the issue and that action is required (a point we also made in our response to the White Paper consultation).
     
  • Monitoring and evaluating the overall regulatory framework’s effectiveness, including the extent to which the implementation of the principles supports innovation.
  • Assessing and monitoring risks across the economy arising from AI.

    Comment: The White Paper envisages monitoring and evaluation functions to support the UK's AI regulatory framework. Here, the AI Bill assigns those functions to a designated AI Authority.
     
  • Supporting testbeds and sandboxes to get AI to market.
  • Accrediting independent AI auditors.
  • Providing education and awareness to give clarity to businesses.

2. Regulatory principles

The AI Authority must have regard to the following principles when performing its role.

First, regulation of AI should deliver:

  • Safety, security and robustness.
  • Transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

Second, any business which develops, deploys, or uses AI should be transparent about it, test it thoroughly and transparently, and comply with applicable laws, including in relation to data protection, privacy, and intellectual property.

Third, AI and its applications should (amongst other things) comply with equalities legislation, be inclusive by design, and generate data that are findable, accessible, interoperable and reusable.

Fourth, any burden on a person, or on the carrying on of an activity, in respect of AI should be proportionate to the benefits, taking into consideration the nature of the service or product being delivered, the nature of the risk, whether the cost of implementation is proportionate to the risk, and whether the burden or restriction enhances UK international competitiveness.

3. Regulatory sandboxes

The AI Authority must collaborate with relevant regulators to construct regulatory sandboxes for AI. A regulatory sandbox allows businesses to test innovative propositions in the market with real consumers and is open to authorised firms, unauthorised firms that require authorisation and technology firms partnering with, or providing services to, UK firms doing regulated activities.

4. AI responsible officers

The Secretary of State, after consulting the AI Authority, must by regulations provide that any business which develops, deploys, or uses AI must have a designated AI officer with duties:

  • To ensure the safe, ethical, unbiased, and non-discriminatory use of AI by the business.
  • To ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased.

5. Transparency, IP obligations and labelling

The Secretary of State, after consulting the AI Authority and such other persons as he or she considers appropriate, must by regulations provide that any person involved in training AI must:

  • Supply to the AI Authority a record of all third-party data and intellectual property (“IP”) used in that training.

  • Assure the AI Authority that they use all such data and IP by informed consent, and that they comply with all applicable IP and copyright obligations.

6. Public engagement

The AI Authority must implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI; and consult the general public and such persons as it considers appropriate as to the most effective frameworks for public engagement, having regard to international comparators.

Next steps

The AI Bill’s future is uncertain.  Most Private Members’ Bills do not become law.  The UK Government signalled at the time of the AI Safety Summit that it was not in a rush to legislate.  The Bill also follows the King’s Speech on 7 November 2023, which set out the agenda for the current Parliamentary session and did not contain any proposals from the Government for AI-specific legislation.

However, Lord Holmes, who proposed the Bill, has previously been involved in delivering legislative change for emerging technologies. For example, he played a central role in the Electronic Trade Documents Act, which came into force on 20 September 2023.

The AI Bill's short-term impact is likely to be to stimulate further political debate around the need for AI regulation to address both the risks and opportunities of AI in the UK.

The AI Bill passed its first reading in the House of Lords on the same day and is currently at the second Parliamentary stage.  You can track the Bill’s progress here.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Martin Cook or any other member of our Technology team.

Related Articles

If you are looking for further relevant information, see ‘The Artificial Intelligence (AI) Law, Regulation and Policy Glossary’ - a selection of key AI terms and their definitions, identifying where they are found in anticipated UK and EU laws and/or regulations, regulatory guidance and UK AI policy.

This article was written by Nicole Simpson.