The European Committee of the Regions (the “ECR”) has published proposed amendments to the EU Artificial Intelligence Act (the “AI Act”). These proposed amendments are yet to be considered by the European Commission but provide an insight into how the AI Act may change. Here, we highlight a selection of the significant changes to come out of the ECR's proposals.
These amendments are separate from those proposed by the EU Committee on Culture and Education, which we wrote about separately. For a recap on the AI Act, see our articles “Artificial intelligence - EU Commission publishes proposed regulations”, “EU Artificial Intelligence Act - what has happened so far and what to expect next” and “The EU Artificial Intelligence Act - recent updates”.
Proposed additions to the AI Act are included in bold and italicised.
What is AI? Is the definition appropriate?
How the AI Act defines AI is important. The definition will ultimately shape the scope of the Act, and determine what and who the AI Act does (and does not) affect.
The AI Act proposes a "single future-proof definition of AI" to help achieve a uniform application of the AI Act. Stakeholders who took part in consultations on how the AI Act should be drafted requested a "narrow, clear and precise definition of AI".
The ECR considers that the definition of AI can be expanded and improved:
- it should be clear that the AI Act's list of AI techniques and approaches is non-exhaustive and should be "regularly updated" (potentially allowing regulators or courts to expand the definition);
- AI is not simply about techniques and approaches - it is also part of social practices, identity and culture;
- algorithms developed by other algorithms should be subject to the AI Act.
Proposed amendment to Article 3 (Definitions) and Annex I
‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed (non-exhaustively) in Annex I, combined with social practices, identity and culture, and that can, for a given set of human-defined objectives, by observing its environment through collecting data, interpreting the collected structured or unstructured data, managing knowledge, or processing the information derived from these data, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Annex I - AI techniques and approaches
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
Harm includes economic harm... for some high-risk AI
The AI Act also seeks to prohibit the placing on the market or putting into service of AI which exploits the vulnerabilities of specific groups, or which uses subliminal techniques to distort a person's behaviour in a manner that causes harm to that person or another person. But what sorts of harm are covered?
The ECR proposes to broaden the types of harm that risk being caused by high-risk AI systems that use subliminal techniques to distort a person's behaviour. We wrote recently about how the EU Committee on Culture and Education (the "ECCE") has also proposed amendments to the scope of the AI Act. There are notable differences in approach between the ECCE and the ECR:
- the ECR proposes to prohibit AI systems which use subliminal techniques that have or are likely to have a detrimental effect on (specifically) consumers, including (but not limited to) "monetary loss or economic discrimination". In contrast, the ECCE's proposed amendments 1) refer instead to "economic harm" (not monetary loss or economic discrimination), and 2) do not limit this to consumers (i.e. such high-risk AI systems would be prohibited for other groups if they cause or are likely to cause economic harm);
- the ECCE's proposal to address the risk of economic harm applies to high-risk AI systems which use subliminal techniques (Article 5(1)(a)) but not to those which exploit the vulnerabilities of specific groups of persons to materially distort their behaviour (Article 5(1)(b)). Why the proposed amendment covers one but not the other is unexplained. In contrast, the ECR proposes to include monetary loss and economic discrimination in both types of high-risk AI practice to be prohibited.
Proposed amendment to Article 5 (Prohibited AI practices)
The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm, infringes or is likely to infringe the fundamental rights of another person or a group of persons, including their physical or psychological health and safety, has or is likely to have a detrimental effect on consumers, including monetary loss or economic discrimination, or undermines or is likely to undermine democracy and the rule of law;
Human intervention in High-Risk AI is (sometimes) required
The AI Act requires human oversight of High-Risk AI systems: "High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use."
However, the ECR is concerned that sometimes oversight is not enough. Some decisions which could be made solely by High-Risk AI should require human intervention. The ECR identifies two such High-Risk AI systems, listed in Annex III(5):
5. Access to and enjoyment of essential private services and public services and benefits:
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
Proposed addition to Article 14 (Human Oversight of High-Risk AI)
Any decision taken by AI systems as referred to in Annex III(5)(a) and (b) shall be subject to human intervention and shall be based on a diligent decision-making process. Human involvement in these decisions shall be guaranteed.
This article is not machine-generated. It had human intervention. Whilst we would like this piece to be totally original, our comment on the EU Committee on Culture and Education's proposed amendments to the AI Act is also applicable here:
The AI Act was always going to be the subject of debate and amendment. We are now seeing specific proposals made for what those amendments should be. That does not mean they will be accepted but they do give an indication of the areas of greatest risk and concern, as well as where the AI Act may not be drafted as some think needed (e.g. for precision or flexibility). In other words, watch this space.
This article was written by Tom Whittaker and Pooja Bokhiria.
“the Commission’s goal of making the EU a global leader in the responsible and human-centred development of AI can only be achieved if local and regional authorities have a significant role. Local and regional authorities are best placed to help create an environment propitious to boosting investment in AI in the coming years and fostering trust in AI”