The UK, US and the EU, amongst others, have become signatories to the Council of Europe Framework Convention on Artificial Intelligence (AI Convention), the “first legally binding international treaty aiming to ensure that AI systems are developed and utilised in ways that respect human rights, democracy and the rule of law”. Each signatory is known as a Party.
The AI Convention seeks to uphold the ethical development and regulation of AI. According to the Council of Europe’s explanatory report, it aims to provide “a common legal framework at the global level in order to apply the existing international and domestic legal obligations that are applicable to each Party.”
According to the UK government (here), “Once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced”.
Here we summarise the key points.
What is the Convention’s objective?
The Convention aims
to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law
Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in the Convention. What those measures look like will reflect the severity and probability of occurrence of adverse “impacts on human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems”.
How is AI defined?
“artificial intelligence system” means
a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.
The definition is intentionally drawn from the OECD’s definition of AI, which is also the definition used in the EU AI Act (see our flowchart for navigating the EU AI Act here). Further, according to the Convention’s explanatory report, it is meant to “ensure legal precision and certainty, while also remaining sufficiently abstract and flexible to stay valid despite future technological developments”.
Scope of the Convention
The Convention covers the activities within the AI lifecycle that have the potential to interfere with human rights, democracy and the rule of law.
The Convention applies to:
- the activities in the AI lifecycle undertaken by public authorities or private actors acting on their behalf.
- private actors in a manner which conforms with the object and purpose of the Convention. What that looks like in practice will depend on what each Party specifies.
The Convention does not apply to:
- activities related to the protection of national security interests, provided they are conducted in a manner consistent with international law, including international human rights law, and with respect for democratic institutions and processes;
- research and development not yet made available for use;
- matters relating to national defence.
Key obligations of the AI Convention
The AI Convention sets out a series of obligations and principles to ensure AI is developed and implemented ethically:
- Protection of human rights as enshrined in international and domestic law.
- Integrity of democratic processes and respect for the rule of law: these should be protected and should not be undermined by AI systems.
- Human dignity and individual autonomy: measures should be adopted to maintain these. Individuals should not be reduced to data points. The explanatory report also highlights “the capacity of artificial intelligence systems for imitation and manipulation”.
- Transparency and oversight: measures should be adopted or maintained to ensure adequate transparency and oversight tailored to the specific contexts and risks. AI decision-making processes and training methodologies should be accessible to stakeholders (especially in sensitive sectors such as finance and healthcare) to allow decisions to be contested. AI-generated content should be identifiable (e.g. via labelling or watermarking).
- Accountability and responsibility: each Party should adopt or maintain measures to ensure accountability and responsibility for adverse impacts on human rights, democracy and the rule of law resulting from activities within the lifecycle of artificial intelligence systems.
- Equality and non-discrimination: measures should be adopted or maintained so that AI systems respect equality and prohibit discrimination. Further, each “Party undertakes to adopt or maintain measures aimed at overcoming inequalities to achieve fair, just and equitable outcomes, in line with its applicable domestic and international human rights obligations, in relation to activities within the lifecycle of artificial intelligence systems”. AI bias should be prevented, including training data biases, confirmation biases and social biases. Each Party is required to proactively correct for structural and historic inequalities.
- Privacy and personal data protection: privacy rights should be protected, and effective guarantees and safeguards in place. The protection of personal data and the continued ethical and legal commitment to protecting data subjects’ privacy in line with existing domestic and international law is reiterated.
- Reliability: this should be promoted. Justifiable trust should be created through standards, technical specifications, quality assurance, transparency and end-to-end accountability.
- Safe innovation: this should be enabled. Innovation should be supported by supervised AI testing sandboxes, which simulate real-world environments and identify privacy, security and safety risks early.
The Convention also sets out obligations for Parties to adopt or maintain measures to ensure that remedies are available.
- Remedies: Authorities should be provided with adequate documentation. Clear complaints procedures should be available.
- Procedural safeguards: AI systems should have appropriate human oversight (e.g. ex post reviews). Users should be notified when interacting with AI systems.
Further, the Convention requires each Party to adopt or maintain risk and impact management frameworks: the “‘severity’, ‘probability’, duration and reversibility of risks and impacts” should be evaluated, mitigated, and documented on an ongoing basis. Parties may also introduce different risk classifications or implement an outright ban.
More broadly, the AI Convention requires its implementation to be non-discriminatory, to respect the rights of persons with disabilities and of children, to undergo public consultation, to promote digital literacy and to safeguard human rights.
The Convention is principles-based. That provides flexibility to accommodate the various legal systems and laws of the signatory Parties. However, it also means that what each principle looks like in practice may differ between jurisdictions.
Next steps
The next step is for signatories to ratify the Convention. The Convention enters into force three months after the date on which five signatories have expressed their consent to be bound by it. For any subsequent Party that ratifies the Convention, it applies three months after that ratification is deposited with the Secretary General of the Council of Europe.
Once in force, the Convention’s reporting and oversight mechanisms will monitor compliance. Whether and to what extent information about such compliance will be made public is not yet known.
The Convention was signed by the UK, the US, the EU, Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.
However, the Convention may have broader application. States, public sector and private sector organisations may look to it as a potential framework or set of principles to guide responsible AI. The 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay) negotiated the treaty. Representatives of the private sector, civil society and academia contributed as observers.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).
This blog was written by Jenora Vaswani.