The UK Government has published its response to the AI Regulation White Paper consultation, outlining its pro-innovation regulatory approach to AI. The White Paper was published in March 2023 (see our blog post with an overview of the White Paper and, separately, our response to the consultation). The response sets out an overall approach based on cross-sectoral principles, a context-specific framework, international leadership and collaboration, and voluntary measures for developers. The UK Government has also paved the way for future legislation on AI, once the risks of AI become more apparent, with a particular focus on general-purpose AI systems.

Here we summarise the key points to know and what to expect next.

Regulatory framework

Broadly, the response reaffirms the UK Government's commitment to the five cross-sectoral principles outlined in the White Paper and confirms the intention to work alongside existing regulators to help them deal with the challenges posed by AI.

Those cross-sectoral principles are:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The response highlights how several regulators are already taking steps in line with the principles-based approach. The UK Government has written to regulators impacted by AI, asking them to publish an update outlining their strategic approach to AI by 30 April 2024. This is expected to include:

  • An outline of the steps they are taking in line with the expectations set out in the White Paper.
  • Analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these.
  • An explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.
  • A forward look of plans and activities over the coming 12 months.

The strategies outlined by regulators will help inform the government's assessment of whether there are weaknesses in the current framework which legislation could address.

Additionally, the UK Government has published new guidance to support regulators in interpreting and applying the principles. This is intended to drive coordination between regulators in implementing the regulatory framework.

The response confirms that the UK Government will proceed with establishing a central function to assist with delivering the AI regulation framework. Steps already taken include the recruitment of a new multidisciplinary team to assess risk and a commitment to publish an 'Introduction to AI assurance' to help build public trust in AI.

Notably, the response also highlights two methods to ensure AI best practice in the public sector:

  • The Algorithmic Transparency Recording Standard (ATRS) 'established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making'. The government 'will now be making use of the ATRS a requirement for all government departments and plan to expand this across the broader public sector over time'.
  • The government is also using the procurement power of the public sector to 'drive responsible and safe AI innovation'. For example, later in 2024 'DSIT will launch the AI Management Essentials scheme, setting a minimum good practice standard for companies selling AI products and services. We will consult on introducing this as a mandatory requirement for public sector procurement, using purchasing power to drive responsible innovation in the broader economy'.


One of the headline announcements was a commitment to invest over £100 million to support the development and regulation of AI. This can be broken down into:

  • £10 million for regulators to ensure they have the capabilities to cope with the challenges posed by AI.
  • £80 million for AI research through the launch of nine new research hubs across the UK.
  • £9 million into a partnership with the US focussed on responsible AI.
  • £2 million of Arts and Humanities Research Council (AHRC) funding to support research into responsible AI in sectors such as education, policing and creative industries.

This follows the £1.5 billion spent in 2023 on building the next generation of supercomputers and is a clear acknowledgment that regulators need more funding to help them tackle the risks posed by AI.

General-purpose AI systems

A large section of the response is dedicated to the challenges posed by highly capable general-purpose AI systems. The response acknowledges the substantial risks arising from the fact that these models can be used in a wide range of applications across different sectors and may not fall neatly within the remit of any single regulator, which sits uneasily with the context-based approach advocated in the response.

The UK Government suggests 'AI technologies will ultimately require legislative action in every country once understanding of risk has matured'. The response stops short of committing to introduce legislation in the UK, focussing instead on the role of voluntary measures in mitigating the risks posed by these models. Any future binding measures would ensure developers adhere to the principles set out above and would only be introduced if 'existing mitigations were no longer adequate'. In the short term, the government will continue to consult relevant stakeholders throughout 2024 to assess how the regulatory framework is working and to develop its understanding of the risks posed by AI in different sectors.

Next steps

The UK Government lists the actions it intends to take during 2024, including:

  • Continuing to develop the UK's domestic policy position on AI regulation.
  • Publishing guidance to ensure the use of AI in HR and recruitment is safe, responsible, and fair.
  • Progressing action to promote AI opportunities and tackle AI risks.
  • Establishing a steering committee to support knowledge exchange and coordination on AI governance.
  • Building out the central function and supporting regulators.
  • Encouraging effective AI adoption and providing support for industry, innovators, and employees.
  • Supporting international collaboration on AI governance.

The response reflects the government's ambition to become the international standard-bearer for the safe development and deployment of AI, while also harnessing its potential to boost the economy and transform public services. It acknowledges the need for an agile regulatory system that can adapt to emerging issues and challenges posed by AI, but leaves the door open for future legislation.

Organisations should be taking action now

Although the UK has adopted a lighter-touch approach to AI regulation than the EU, businesses should not delay in planning how to manage the real and significant risks which AI presents. A range of legislation and regulation already exists in the UK which affects how AI is procured, developed and deployed. As outlined above, many regulators are already taking action within their domains, and businesses should be ready to adapt and respond to updated guidance or strategies focused on sector-specific activities. Businesses operating across multiple jurisdictions will need to prepare for the imminent arrival of the EU AI Act.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog: AI: Burges Salmon blog.

This article was written by Sam Efiong.