The UK has chosen to set its own course for AI regulation. 

Below we summarise the UK’s current regulatory approach to AI. 

Pro-innovation: The overarching message affirms the government’s commitment to a pro-innovation approach to AI, focused on building trust and confidence in safe AI to unlock its benefits and put the UK at the forefront of global progress. A number of initiatives aim to support innovation, including the launch of a regulator-run AI and Digital Hub.

The use, not the technology: The UK government’s concern is with the risks of how AI is used, not necessarily what technology is used (although, see Future legislation, below). As a result, the government’s regulatory framework does not define AI. This contrasts with specific UK legislation and the EU AI Act, both of which define AI (see our glossary of AI terms here).

Cross-sectoral: The key feature of the approach is that regulation of AI will be driven by existing regulators and shaped around five non-binding cross-sectoral principles. The principles are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

Existing regulators apply existing (updated) regulations: UK regulators will have responsibility for applying the principles and governing AI in their respective areas using existing statutory powers. The government has published new guidance to help regulators interpret and apply the principles, and has committed £10 million to support them. Regulators have launched – and are expected to continue launching – consultations to consider how their regulatory remit and role require updating to address the opportunities and risks of AI.

Central government function: The UK government has created a central function for AI that oversees and supports the AI ecosystem in the UK. Look out for updates from central government around cross-economy risks and strategies.

AISI: To inform understanding of the capabilities and risks of advanced AI systems, the government announced the formation of the AI Safety Institute (AISI). The AISI will conduct targeted research on new types of AI, the results of which will inform the regulatory approach going forward.

Copyright: The Intellectual Property Office has been leading discussions on a voluntary code of practice to address the interaction between copyright ownership and AI. The response to the White Paper announced that the working group has been unable to agree an effective code, and responsibility for finding a solution has now returned to DSIT.

Consultation: Over the course of 2024, the UK government will scale up engagement with key stakeholders in AI to further develop domestic policy. There will be specific calls for evidence on AI-related risks to trust in information and on generative AI in education, as well as a call for views on securing AI models, with the potential for a Code of Practice on the cyber security of AI.

Future legislation: The government has insisted it will not rush to legislate. However, it has recognised that voluntary measures may prove ineffective and that specific groups of technologies, such as foundation models, may require targeted legislation.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).