On 17 May 2024 the Council of Europe adopted the “first-ever international legally binding treaty aimed at ensuring the respect of human rights, the rule of law and democracy legal standards in the use of artificial intelligence (AI) systems”. The convention will be open for signature by the EU and both member and non-member states from 5 September 2024. Forty-six Council of Europe member states, the EU and 11 non-member states (including the USA) participated in drafting the convention; whether they will sign and ratify it, and so become Parties, remains to be seen.
Here we summarise the key points.
The aim of the Convention is:
to ensure that the potential of artificial intelligence technologies to promote human prosperity, individual and societal wellbeing and to make our world more productive, innovative and secure, is harnessed in a responsible manner that respects, protects and fulfils the shared values of the Parties and is respectful of human rights, democracy and the rule of law.
The Convention “focuses on the protection and furtherance of human rights, democracy and the rule of law, and does not expressly regulate the economic and market aspects of artificial intelligence systems.”
How is AI defined?
“‘artificial intelligence system’ means a machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that *may* influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.” (emphasis added)
This matches the revised OECD definition of AI. It is also largely the same as the EU AI Act’s definition, albeit in a slightly different order, and the EU AI Act uses “can” rather than “may”.
When does the Convention apply?
The convention applies to:
- The activities within the lifecycle of AI systems undertaken by public authorities, or private actors acting on their behalf;
- The activities within the lifecycle of AI systems undertaken by private actors, in a manner conforming with the object and purpose of the Convention.
What are the exceptions?
The Convention does not apply to AI used for the protection of national security, research and development, or national defence (Article 3):
A Party shall not be required to apply this Convention to activities within the lifecycle of artificial intelligence systems related to the protection of its national security interests, with the understanding that such activities are conducted in a manner consistent with applicable international law, including international human rights law obligations, and with respect for its democratic institutions and processes.
Without prejudice to Article 13 and Article 25, paragraph 2, this Convention shall not apply to research and development activities regarding artificial intelligence systems not yet made available for use, unless testing or similar activities are undertaken in such a way that they have the potential to interfere with human rights, democracy and the rule of law.
Matters relating to national defence do not fall within the scope of this Convention.
What are the obligations?
Each Party shall adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law.
Further, each Party shall adopt or maintain measures that seek to
- "ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice."
- “protect its democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions.”
Parties are also required to implement the following principles in their domestic legal systems:
- Human dignity and individual autonomy
- Transparency and oversight
- Accountability and responsibility
- Equality and non-discrimination
- Privacy and personal data protection
- Reliability
- Safe innovation
What measures must each Party take?
Each Party shall:
to the extent remedies are required by its international obligations and consistent with its domestic legal system, adopt or maintain measures to ensure the availability of accessible and effective remedies for violations of human rights resulting from the activities within the lifecycle of artificial intelligence systems.
To support the above, each Party shall adopt or maintain measures including:
measures to ensure that relevant information regarding artificial intelligence systems which have the potential to significantly affect human rights and their relevant usage is documented, provided to bodies authorised to access that information and, where appropriate and applicable, made available or communicated to affected persons;
measures to ensure that the information referred to in subparagraph a is sufficient for the affected persons to contest the decision(s) made or substantially informed by the use of the system, and, where relevant and appropriate, the use of the system itself; and
an effective possibility for persons concerned to lodge a complaint to competent authorities.
Further, there are transparency requirements where individuals interact with AI systems:
Each Party shall seek to ensure that, as appropriate for the context, persons interacting with artificial intelligence systems are notified that they are interacting with such systems rather than with a human.
Parties must also monitor risks. Each Party is required to adopt or maintain measures to identify, assess, prevent and mitigate risks posed by AI systems, with specific criteria those measures must meet, so that adverse impacts on human rights, democracy and the rule of law are “adequately addressed”.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler or any other member in our Technology team.
This article was written by Charlotte Teece and Nathan Gevao.