On November 1, 2023, 28 governments (including the UK, US, France, Germany, India and China) signed the Bletchley Declaration. The Declaration, signed at the AI Safety Summit 2023, aims to coordinate global cooperation on artificial intelligence (AI) safety.

The Declaration outlines a shared understanding of the global opportunities and risks posed by AI. It emphasises the need for urgent international collaboration to regulate AI and to ensure that the technology is developed in a safe and responsible manner.

In line with the focus of the Safety Summit, the Declaration targets 'frontier AI', defined in the Declaration as 'highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks'. Frontier AI was identified as posing the highest risk and was therefore highlighted as being in the most urgent need of regulation.

Key Takeaways

  • International cooperation. The Declaration emphasised above all the need for international cooperation to address the inherently international risks posed by AI. It urged countries to adopt a pro-innovation, proportionate approach to governance and regulation that maximises the benefits of AI while taking into account its risks. It encouraged the development of classifications of AI risk and of legal frameworks appropriate to national circumstances.
  • AI Potential and Extent. The Declaration emphasised the use of AI systems across many areas of society including housing, employment, transport, education, health, accessibility, and justice. It emphasised the potential of AI to transform public services such as health and education. It also spoke of its potential in relation to food security, science, clean energy, biodiversity and climate, particularly in strengthening efforts towards the achievement of the United Nations Sustainable Development Goals.
  • Risk Identification. The Declaration set out significant risks posed by AI, including in relation to human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection. It highlighted particular risks relating to potential intentional misuse, unintended issues of control arising from misalignment with human intent, cybersecurity and biotechnology.
  • Risk Mitigation. The Declaration indicated that risk could be mitigated through systems for safety testing, evaluations and other appropriate measures. It encouraged all relevant actors to provide context-appropriate transparency and accountability on their plans for risk mitigation.

Outlook

The Declaration set out its agenda for addressing frontier AI risk, as follows:

  • Identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.
  • Building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research.

The Republic of Korea will host a mini (virtual) AI summit within the next six months. France will host the next official AI Safety Summit in 2024.

The Declaration is a landmark step in developing international cooperation on AI. It calls for accountability from those developing frontier AI capabilities and highlights the urgent need for policy development in this area.

If you have any questions or would otherwise like to discuss any issues raised in this article, please contact David Varney, Tom Whittaker or any other member of our Technology team.

This article was written by Victoria McCarron.