An international Convention on AI focussed on human rights, the rule of law, and democracy is being developed by the Council of Europe (CoE), an international human rights organisation. The AI Convention, sometimes referred to as the AI Treaty, is intended to be the first legally binding international convention on AI.  EU States and other countries – including the UK and US – could be signatories.  

It comes at a time of increasing developments in AI regulation around the globe (see our horizon scanning visual here) and a growing number of legal instruments defining AI terms (see our glossary here).  

Here, we provide a brief overview of the proposed Convention on AI.

 

  • Who is involved? The CoE has 46 members, including the EU27, the UK, Turkey and Ukraine – Russia was expelled in 2022. The US, Canada, Mexico and Israel are observer countries, which are not bound by the CoE's decisions but can sign its treaties, such as this one on AI, if they wish. Besides the CoE members and observers, the CoE committee drafting the Convention (the Committee on Artificial Intelligence) includes stakeholders from civil society and the private sector.


  • What is the Convention’s purpose? The Convention aims to strengthen the protection of individuals’ fundamental rights, such as the right to privacy and the protection of personal data. It is also seen as an opportunity to complement the European Commission’s proposed AI Act. Signatories to the Convention would agree that AI systems which pose unacceptable risks to individuals should be prohibited, and that there should be minimum safeguards to protect individuals who may be affected by AI systems.


  • What is included in the Convention? Drafts of the Convention are not publicly available from the CoE.  However, what appears to be an early version (which is likely to have been superseded) is available online.  This could give an indication of the direction of travel.  In summary, this draft of the Convention includes:
    • Requirements for each signatory to take measures in domestic legislation to ‘give effect to the principles, rules and rights set out in the Convention’;
    • This includes ensuring that in each jurisdiction, AI systems: 
      • will not undermine fundamental rights and freedoms or the rule of law; 
      • promote sustainability and environmental protection; 
      • include oversight mechanisms;
      • ensure accountability, responsibility and legal liability; 
      • ensure privacy, safety, security and robustness; 
      • include public discussion. 
      • (note: these requirements are each expressed to apply at different stages of the AI lifecycle, some across the whole lifecycle and others only to parts of it. Why this is the case is unknown, and whether it remains so in later drafts will be of interest)
    • Additional requirements for design, development and application of AI systems in the public sector;
    • Ensuring that there are procedural safeguards for AI systems, including: 
      • transparency towards subjects impacted by AI systems, including the fact that they are interacting with an AI system; 
      • a right to human review of decisions made by AI systems; 
      • ‘where appropriate’, relevant explanations and justifications for decisions are offered in ‘plain, understandable, and coherent language and are tailored to the context’, with sufficient information for the subject to be able to challenge the decision.
    • Use of risk assessments for AI systems to be overseen by specified national competent authorities;
    • Requirements for signatories to co-operate and, where they disagree, issue opinions and, if necessary, resolve their disputes through arbitration.

  • There are clear similarities but also notable differences between the Convention and the AI Act. The Convention follows a risk-based approach, potentially prohibiting AI systems posing unacceptable risks whilst placing limitations and obligations on high-risk AI systems. However, the definitions of artificial intelligence differ – the AI Act explicitly includes generative AI (AI-produced content such as images, video and audio) whereas the draft Convention does not. It is an open question whether the two instruments are intended to address the same AI systems and AI uses. Without public documents explaining the drafting, or access to the latest version, the extent of the differences and the intention behind them remain unknown.


  • What is the scope of the Convention? In the last official meeting of the Committee, the US, with support from Canada, Japan, the UK, and Israel, made a case for limiting the scope of the Convention to public sector bodies, excluding private sector organisations.

 

  • What is the current status of the Convention? The European Commission's intention, in its original proposals in August 2022, was for ‘the framework ... to be drafted by 15 November 2023 and finalised by the time the [Council of Europe’s Committee on Artificial Intelligence] is wound up in 2024’. However, we understand that the European Commission has postponed discussions on the Convention with a view to obtaining a mandate to negotiate on behalf of the EU. It has been suggested that the Commission is delaying development of the AI Convention to ensure that the AI Act becomes the international standard for this emerging technology. Further discussions are expected at a board meeting in Strasbourg on 1 - 3 February 2023.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.

With thanks to Eve Hayzer for assisting with this article.