The UN Security Council - the UN body responsible for the maintenance of international peace and security - held its first discussion of artificial intelligence. The UK's Foreign Secretary, James Cleverly, outlined the UK's position:

  • AI promises opportunities on a global scale: discoveries in medicine, productivity boosts and tackling climate change, to name a few;
  • AI will also affect the work of the Security Council: it could enhance or disrupt global strategic stability; challenge assumptions about defence and deterrence; and increase the speed, scale and spread of disinformation;
  • So global governance is urgently needed for 'transformative technologies' including AI.
  • The UK's vision is founded on four principles:
  1. open: AI should support freedom and democracy

  2. responsible: AI should be consistent with the rule of law and human rights

  3. secure: AI should be safe and predictable by design; safeguarding property rights, privacy and national security

  4. resilient: AI should be trusted by the public and critical systems must be protected

  • the UK's approach builds on existing multilateral initiatives (such as the AI for Good Summit in Geneva, or the work of UNESCO, the OECD and the G20) and work with partners, like the Global Partnership for AI, the G7’s Hiroshima Process, and the Council of Europe.
  • government and industry will need to work together.

Comment

AI-specific regulations are coming at state/regional, country and international levels (see our horizon scan here).

There are reasons to expect similar approaches. There will be international co-operation: the UK is positioning itself as an international leader for responsible AI (the UK plans to bring world leaders together for the first major global summit on AI safety), and the UK White Paper on AI regulation sets out how the UK will work with international partners (see our flowchart for navigating the UK's position here). The Foreign Secretary's speech adds to the calls for international co-operation, as well as demonstrating the UK's determination to lead in international governance of AI.

However, there are also reasons to think that regulations will diverge. The UK White Paper intentionally takes a different approach to the EU's. Those calling for regulation recognise that it is often context-specific, meaning that regulations will reflect their local context both in how they are drafted and in how they are enforced. And regulation (or the lack of it) may be seen as a way to boost innovation, so some jurisdictions may favour a lighter-touch regulatory regime for AI depending on where they see the opportunities and risks.

Monitoring how different jurisdictions approach AI regulation is useful. For example, it indicates the extent to which jurisdictions will converge with, or diverge from, other regulatory regimes; because many organisations (e.g. AI providers and deployers) operate across jurisdictions, there is a risk of having to comply with multiple different regulatory regimes. It also helps identify which principles for AI governance are important - such as those identified by the Foreign Secretary - and how they translate into practice, both in terms of what regulators and government require and in terms of what organisations do in their own AI governance.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, or any other member of our Technology team.