As artificial intelligence (AI) technology advances at an increasingly rapid pace, so do calls for its regulation.  

This year, the UK Government has been vocal about its ambitions to mitigate the risks of AI while cultivating safe AI for public benefit. 

Accordingly, the UK is hosting an AI Safety Summit on 1 and 2 November at Bletchley Park. The Summit is designed to gather international governments, leading AI companies and experts to reach a shared understanding of the risks posed by frontier AI and to agree options for mitigating these risks. 

Ahead of the Summit, we discuss its key focus areas, as outlined in the opening day programme released by the Department for Science, Innovation and Technology (DSIT) on 16 October, as well as an introduction published by DSIT on 26 September.  

We accompany this with additional context around the UK Government’s attitude towards AI. 

AI Safety Summit

DSIT indicates that ‘Frontier AI’ will be the focus of the Summit: general-purpose, highly capable systems that reflect the most recent advances in AI. These include cutting-edge Large Language Models (LLMs), deep learning models trained on vast datasets to generate human-like text. Specific examples are OpenAI’s GPT-4, Google’s Gemini and Anthropic’s Claude. The Summit will address the concerns and risks surrounding these powerful Frontier AI systems. 

In particular, DSIT states:

The Summit will focus on certain types of AI systems based on the risks they may pose. These risks could stem from the most potentially dangerous capabilities of AI, which we understand to be both at the ‘frontier’ of general-purpose AI, as well as in some cases specific narrow AI which can hold potentially dangerous capabilities.

The Government has set out five key ambitions for the Summit: 

  1. Understand the risks posed by Frontier AI, and the need for action;
  2. Agree deeper international collaboration for Frontier AI safety;
  3. Agree safety measures at an organisational level;
  4. Collaborate on AI safety research; and
  5. Showcase how safe development of AI will enable AI to be used for global good. 

Day one of the Summit, held on 1 November, will include two roundtable discussions bringing together multi-disciplinary attendees. 

The first roundtable will focus on Understanding Frontier AI Risks across the following categories:

  • Global Safety from Frontier AI Misuse, covering situations where a bad actor is aided by new AI capabilities in biological or cyber-attacks, development of dangerous technologies, or critical system interference;
  • Unpredictable Advances in Frontier AI Capability, discussing risks from unpredictable ‘leaps’ in frontier AI capability as models are rapidly scaled, emerging forecasting methods, and implications for future AI development, including open-source;
  • Loss of Control, covering risks that could emerge from advanced systems falling out of alignment with human values and intentions; and
  • Integration of Frontier AI, involving aspects such as election disruption, bias, crime and online safety, and exacerbation of global inequalities that might develop from the integration of Frontier AI into society.

The second roundtable will focus on Improving Frontier AI Safety, which will consider the following questions:

  • What should Frontier AI developers do to scale responsibly?
  • What should National Policymakers do in relation to the risks and opportunities of AI?
  • What should the International Community do in relation to the risks and opportunities of AI?
  • What should the Scientific Community do in relation to the risks and opportunities of AI?

Conclusions from each session will be published at the end of the Summit. Day one will also include a panel discussion on the opportunities for AI to transform education for future generations. 

Day two of the Summit will convene international governments, companies and experts to further the discussion on addressing the risks of emerging AI technology and to agree next steps on how it can be utilised for public benefit. 

Additional points of interest are as follows:

  • Technology Secretary Michelle Donelan and the Prime Minister’s Representatives, Matt Clifford and Jonathan Black, recently discussed the intentions behind the Summit at a meeting at Bletchley House. Matt Clifford noted particular interest in approaches that increase knowledge of AI capabilities, such as model evaluations, and in the idea of ‘responsible capability scaling’. Additionally, greater investment in alignment research would be considered. 
  • The Government has indicated it will continue to engage with stakeholders to inform the Summit’s programme and engagement. It is partnering with a wide array of civil society groups and tech bodies, including The British Academy, The Royal Society and techUK, to hold a series of workshops and Q&A events in the run-up to the Summit. 
  • Most recently, Michelle Donelan attended roundtables with The Alan Turing Institute, the Ada Lovelace Institute, the Centre for the Governance of AI (GovAI), The Centre for Long-Term Resilience, Apollo Research, techUK and the Centre for the Study of Existential Risk at the University of Cambridge to discuss public trust in AI and national and international cooperation, and to hear feedback on what a good outcome for the AI Safety Summit would look like in practice. Questions and concerns from these roundtables will be brought forward to the Summit. 
  • Further to this, Matt Clifford and Jonathan Black recently said that they have begun engaging with international colleagues to shape the Summit’s discussions on identifying and mitigating the risks of AI.
  • Finally, it is notable that criticism of the Summit has emerged over the lack of clarity around intellectual property and the recognition of human creative endeavour in AI systems, as well as the Summit’s narrow focus on Frontier AI. Matt Clifford has responded to the latter criticism, stressing that companies building systems with potentially dangerous capabilities should be subject to greater scrutiny, while AI companies that do not pose such risks should be free to innovate.   

Context: Attitude of UK Government towards AI regulation 

Broadly, the UK Government has been advocating a ‘pro-innovation’ approach to AI regulation, intended to allow these technologies to flourish safely. 

This is particularly indicated in the AI Regulation White Paper, which set out the Government’s proposed approach to AI regulation; we outline this further here. Additionally, see our flowchart for navigating the White Paper here.

In summary, the main regulatory objectives set out in the White Paper are to drive growth and prosperity, increase public trust in AI, and strengthen the UK’s position as a global leader in AI. The key principles of the UK’s approach are: 1) safety, security and robustness, 2) transparency and explainability, 3) fairness, 4) accountability and governance, and 5) contestability and redress. 

This is reflected in the White Paper’s proposed next steps. It sets out an initial consultation period covering the six months following the White Paper’s publication; this is ongoing, and an update is expected before the end of the year. During this period, the Government has been engaging with relevant stakeholders: issuing the cross-sectoral principles to regulators while working with them to understand how the characteristics of AI should be regulated, working towards publishing an AI Regulation Roadmap with plans for central functions, and analysing findings from commissioned research.

Subsequently, the Government will agree partnerships with leading AI organisations, encourage key regulators to publish guidance on how the cross-sectoral principles apply within their remit, publish proposals for the design of a central monitoring and evaluation (M&E) framework, and develop a regulatory sandbox. 

The Government has also developed a portfolio of AI assurance techniques through the Centre for Data Ethics and Innovation (CDEI), which we discuss in detail here. These techniques are designed to measure, evaluate and communicate whether an AI system meets relevant criteria to support the development of trustworthy AI. The portfolio includes case studies to illustrate how these techniques can be used in combination to promote responsible AI.

Outlook 

The upcoming Summit reflects the UK Government’s attitude to AI: positive towards AI development while remaining cautious about the associated risks. Accordingly, the regulatory framework being built is intended to ensure that AI is developed within a safe landscape and that the UK emerges as a leading location for AI developers. The Government has expressed that the Summit is merely a ‘first step’ in its international conversations around AI safety. 

It is notable that the Government has recently been pushing AI companies such as OpenAI and DeepMind to reveal more information about the internal workings of their models. The extent and technical details of this information have not yet been agreed; the Government intends to reach agreement prior to the Summit. 

In light of these developments, a greater level of regulation and legislation for AI than previously anticipated can be expected. 

If you would like any further information, please contact David Varney, Tom Whittaker or another member of our Technology Team.

This article was written by Liz Smith and Victoria McCarron.