The EU Committee on Industry, Research and Energy (the “Committee”) has proposed amendments to the scope of the EU Artificial Intelligence Act (the “AI Act”).  Overall, the Committee welcomes the proposed AI Act but proposes amendments to encourage innovation and refine the definition of AI.  These proposed amendments are yet to be considered by the European Commission but provide an insight into how the AI Act may change. 

Here, we highlight a selection of the significant changes to come out of the Committee’s proposals.

These amendments are separate from those proposed by the EU Committee on the Internal Market, on the Regions, on Culture and Education, and on Legal Affairs.  For a recap on the AI Act, see our articles “Artificial intelligence - EU Commission publishes proposed regulations”, “EU Artificial Intelligence Act - what has happened so far and what to expect next” and “The EU Artificial Intelligence Act - recent updates”. 

Proposed additions to the AI Act appear in bold italics, while wording proposed to be deleted is marked in square brackets, e.g. [Proposed deletion:...].

A globally accepted definition of AI

The EU has a goal to set global standards for regulating AI.  One way of doing so is to help establish a globally accepted definition of AI.  That doesn't mean it will be the only definition of AI, but one which is accepted as an appropriate definition for regulating AI.  The difficulty lies in pinning down such a definition: regulation must be specific enough that it can be understood and applied, yet flexible enough to be 'future proof' and still apply as the technology evolves.

There is debate as to the appropriate definition.  This is in part because it directly affects which AI systems are, and are not, caught by the Act.  As Euractiv reported recently:

"MEP Benifei is proposing a broad definition and deleting the list of AI techniques and approaches in Annex I to make the regulation future-proof.

By contrast, the centre-right European People’s Party insists on the definition agreed upon at the OECD level. The EPP also introduced a definition of machine learning as the ability to find patterns without being explicitly programmed for a specific task."

The definition is likely to be the subject of significant debate and to change further.  Where it ends up is impossible to predict with certainty.  In the meantime, here is the Committee's proposal for changing the definition of AI in the Act.  One point to note is that, similar to other proposed amendments to the Act, the Committee proposes to make sure the Act applies in the virtual, as well as the real, world.

Proposed amendments to the definition of AI

(1) ‘artificial intelligence system’ (AI system) means [Proposed deletion: software that is developed with one or more of the techniques and approaches listed in Annex I and can,] a machine-based system that can, with varying levels of autonomy, for a given set of human-defined objectives, [Proposed deletion: generate outputs such as] make predictions, content, recommendations, or decisions influencing real or virtual environments they interact with;  

(1a) ‘autonomy’ means that an AI system operates by interpreting certain input and by using a set of pre-determined objectives, without being limited to such instructions, despite the system’s behaviour being constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its developer;

Meeting the obligations - realistic standards?

AI systems classed as high-risk under the Act will be subject to various obligations.  Those obligations have been the subject of debate, in part to ensure that the standards imposed are appropriate and not unrealistic.  The draft Act currently includes an obligation that high-risk AI systems have data sets which are 'free of errors', but it has been questioned whether that is possible, let alone appropriate.  As a result, the Committee has proposed amending some of the standards to what it considers more realistic.

Proposed amendments to Article 10 para 1 (Data and Data governance)

1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, assessment, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is feasible from a technical point of view while taking into account the latest state-of-the-art measures, according to the specific market segment or scope of application.

1a. Techniques such as unsupervised learning and reinforcement learning, that do not use validation and testing data sets, shall be developed on the basis of training data sets that meet the quality criteria referred to in paragraphs 2 to 5. 

1b. Providers of high-risk AI systems that utilise data collected and/or managed by third parties may rely on representations from those third parties with regard to quality criteria referred to in paragraph 2, points (a), (b) and (c). 


3. Training, validation and testing data sets [Proposed deletion: shall be] are designed with the best possible efforts to ensure that they are relevant, representative, [Proposed deletion: free of errors and complete] and appropriately vetted for errors in view of the intended purpose of the AI system. In particular, they shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof. 

EU setting global standards

The European Union has an objective to be "a global leader in the development of secure, trustworthy and ethical artificial intelligence".   The Committee, and other EU committees and bodies, support that objective.  

The Act is designed to help achieve that objective, in part, by requiring high-risk AI systems to be subject to specific obligations.  Those high-risk AI systems are to be added to an EU database of high-risk systems.  Currently, the Act envisages that providers who place a high-risk AI system on the EU market will register that AI system on the database.  

The Committee believes that the Act can go further to achieve the objective of setting global standards.  One example is to amend the Act so that providers of high-risk AI systems who put an AI system onto a market outside of the EU can register it on the EU database provided it meets all of the Act's obligations.  The Committee appears to think that AI providers globally will want to comply with the AI Act voluntarily and prove this by registering the AI system with the EU database.

Proposed change to Article 51 (registering high-risk AI systems on the EU database)

 A high-risk AI system designed, developed, trained, validated, tested or approved to be placed on the market or put into service, outside the Union, can be registered in the EU database referred to in Article 60 and placed on the market or put into service in the Union only if it is proven that at all the stages of its design, development, training, validation, testing or approval, all the obligations required from such AI systems in the Union have been met. 

If you would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.