Nearly one year after the European Commission published the draft Regulation on Artificial Intelligence ("the AI Act"), further amendments have been proposed in the draft report of the European Parliament's Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE). This follows reports from the EU Committee of the Regions (here) and the Committee on Culture and Education (here), and an opinion of the European Central Bank (here), each proposing amendments to the AI Act. These proposed amendments are of interest to anyone seeking insight into how the AI Act may change.
This article looks at the following proposed amendments by IMCO and LIBE:
- Changing the definition of AI - should AI Systems subject to the AI Act be limited to those with human-defined objectives?
- Risk Management Systems for High-Risk AI Systems - clarifying who is at risk, but creating potential uncertainty for credit institutions;
- Additional High-Risk AI Systems to be subject to the AI Act's obligations - health and life insurance, medical and emergency triage systems, machine-generated news, and deepfakes;
- Enforcement powers for the Commission and potential additional fines for infringement of the AI Act.
In the extracts below, wording proposed to be deleted from the AI Act is marked as [Proposed deletion: ...]; the remainder of each extract shows the text as it would read with the proposed additions included.
Defining AI - whose objectives are they anyway?
How AI is defined is important. The AI Act recognises that "The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments".
There is no universally accepted definition of AI. However, definitions often refer to an AI system designed to achieve specified objectives. Those objectives may be set by a human. For example, the person using the AI system (the 'user' in the AI Act) may specify that they want the AI system to generate a prediction based on a dataset - think of an AI system predicting whether a borrower will default on their overdraft based on their financial history. The current draft of the AI Act defines AI by reference to human-defined objectives, as does the US National Artificial Intelligence Initiative Act of 2020.
However, what if an AI System's objectives were set not by a human but by another AI system? It may be more difficult to envisage how such an AI System would operate and the risks it poses. But should it fall outside the prohibitions and obligations of the AI Act? Other definitions of AI do not refer to who sets the objectives. For example, the OECD's definition of AI simply refers to 'a given set of objectives'.
The IMCO and LIBE Committees propose to remove reference to 'human-defined' objectives but do not explain why in the draft report.
Proposed amendment to article 3 - Definition of AI
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of [Proposed deletion: human-defined] objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with...'
Risk Management Systems for High-Risk AI Systems
Who is at risk?
The AI Act permits High-Risk AI Systems, subject to specific requirements. One of those is that a "risk management system shall be established, implemented, documented and maintained".
The AI Act specifies the steps required of the risk management system including: identification of known and foreseeable risks; evaluating risks of reasonably foreseeable misuse; and adoption of suitable risk management measures.
However, who is at risk from the High-Risk AI System that the AI Act is trying to protect? The IMCO and LIBE Committees propose to answer this which, in turn, gives further clarity over what a compliant risk management system would need to address.
Proposed amendment to article 9 - High-Risk AI Risk Management Systems
(a) identification and analysis of the known [Proposed deletion: and foreseeable risks associated with each] and the reasonably foreseeable risks [Proposed deletion: associated with each] that the high-risk AI system can pose to:
(i) the health or safety of natural persons;
(ii) the legal rights or legal status of natural persons;
(iii) the fundamental rights of natural persons;
(iv) the equal access to services and opportunities of natural persons;
(v) the Union values enshrined in Article 2 TEU [Which is: The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail].
Credit institutions
The risk management system for High-Risk AI Systems appears to be in addition to other obligations on credit institutions that develop or use such systems. The AI Act requires quality management systems and monitoring for High-Risk AI Systems, but those obligations are deemed to be fulfilled by credit institutions which comply with Directive 2013/36/EU. However, no similar deemed fulfilment is in place for the risk management systems required for High-Risk AI Systems.
The European Central Bank, in its opinion on the AI Act (which we wrote about here), welcomed the AI Act's attempt to avoid overlap with existing legislative frameworks for credit institutions. It is unclear whether the potentially additional risk management systems for credit institutions' High-Risk AI Systems are an intended overlap or not.
Database of public bodies using high-risk AI
The AI Act seeks a publicly accessible EU database of High-Risk AI Systems. Those AI Systems must be registered by the provider before being placed on the market or put into service. A provider means a 'natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge'.
A provider is different to a 'user', being 'any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity'. So, under the current AI Act, public bodies using a High-Risk AI System provided by a third party would not have to register separately on the EU database.
The IMCO and LIBE Committees propose a greater degree of transparency - public bodies and EU institutions using High-Risk AI Systems should register that use on the EU database. This reflects an ongoing discussion about what good governance looks like for AI Systems used by public bodies (we wrote here about the UK's proposed algorithmic transparency standard for public bodies, and here about what algorithmic transparency in the public sector looks like).
Proposed addition to article 51 - Registering High-Risk AI Systems
Before putting into service or using a high-risk AI system in accordance with Article 6(2), users who are public authorities or Union institutions, bodies, offices or agencies or users acting on their behalf shall register in the EU database referred to in Article 60.
Additional High-Risk AI Systems
The AI Act specifies certain types of AI Systems as High-Risk. The IMCO and LIBE Committees propose that the following be added:
- Health and life insurance - 'AI systems intended to be used for making decisions or assisting in making decisions on the eligibility of natural persons for health and life insurance;'
- Medical and emergency triage systems - 'AI systems intended to be used to evaluate and classify emergency calls by natural persons or to dispatch, or to establish priority in the dispatching of emergency first response services, including by police and law enforcement, firefighters and medical aid, as well as of emergency healthcare patient triage systems';
- Machine-generated news - 'AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human-generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles'. Also, such systems 'shall disclose that the text content has been artificially generated or manipulated, including to the natural persons who are exposed to the content, each time they are exposed, in a clear and intelligible manner.' (The EU Committee on Culture and Education also proposed including machine-generated news as a High-Risk AI System (as we wrote about here)).
- Deepfakes - 'AI systems intended to be used to generate or manipulate audio or video content that appreciably resembles existing natural persons, in a manner that significantly distorts or fabricates the original situation, meaning, content, or context and would falsely appear to a person to be authentic.'
Enforcement by the EU Commission and additional penalties
The AI Act envisages that Member States will designate a national supervisory authority to enforce the AI Act. Penalties can be sizeable; use of prohibited AI can result in fines of up to €30m or, for companies, up to 6% of worldwide annual turnover for the preceding financial year, whichever is higher.
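To illustrate the 'whichever is higher' mechanics, here is a minimal sketch in Python; the function name and turnover figures are hypothetical illustrations, not drawn from the AI Act:

```python
# Minimal sketch (assumed figures, illustration only): the AI Act caps fines for
# use of prohibited AI at EUR 30m or, for companies, 6% of worldwide annual
# turnover for the preceding financial year - whichever is higher.

FIXED_CAP_EUR = 30_000_000.0   # EUR 30m fixed cap
TURNOVER_RATE = 0.06           # 6% of worldwide annual turnover

def maximum_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 1bn turnover: 6% = EUR 60m, so the turnover cap applies.
print(maximum_fine(1_000_000_000))  # 60000000.0
# Hypothetical company with EUR 100m turnover: 6% = EUR 6m, so the EUR 30m cap applies.
print(maximum_fine(100_000_000))    # 30000000.0
```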
However, recognising that infringements may take place across multiple Member States, or that national authorities may not bring enforcement proceedings, the IMCO and LIBE Committees propose that the Commission can enforce the AI Act (in summary):
- acting upon the AI Act Board's recommendation or its own initiative;
- where: there are sufficient reasons to believe there is a widespread infringement of the AI Act; the infringement affects or is likely to affect 45m+ EU citizens; the infringement is in two or more Member States where national authorities have not taken any action;
- the Commission then takes forward the enforcement - the relevant national authority or authorities are no longer entitled to do so but will still co-operate with the Commission;
- the Commission has wide-ranging investigation and enforcement powers: access to documents and data; power to require information from users; ability to carry out unannounced on-site and remote inspections; ability to conduct interviews;
- the Commission can order interim measures and impose penalties;
- the Commission can impose fines 'not exceeding 2% of the total turnover in the preceding financial year, where the operator intentionally or negligently': fails to provide information required by the Commission or rectify incorrect, incomplete or misleading information provided; refuses to permit an on-site or remote inspection.
Notwithstanding these proposed enforcement powers, the co-rapporteurs emphasise that the goal of the AI Act is to ensure both the protection of health, safety, fundamental rights, and Union values and, at the same time, the uptake of AI throughout the Union, a more integrated digital single market, and a legislative environment suited for entrepreneurship and innovation. This spirit, they say, has guided and will continue to guide their work on this Regulation.
If you would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.
Source: https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=CELEX%3a52021PC0206