The EU Committee on Culture and Education (the “Committee”) has proposed amendments to the scope of the EU Artificial Intelligence Act (the “AI Act”). Overall, the Committee welcomes the proposed AI Act but proposes amendments to extend the list of high-risk AI systems and to modify provisions related to prohibited AI systems. These proposed amendments are yet to be considered by the European Commission but provide an insight into how the AI Act may change. Here, we highlight a selection of the significant changes to come out of the Committee’s proposals.
These amendments are separate from those proposed by the EU Committee on the Regions, which we wrote about separately. For a recap on the AI Act, see our articles “Artificial intelligence - EU Commission publishes proposed regulations”, “EU Artificial Intelligence Act - what has happened so far and what to expect next” and “The EU Artificial Intelligence Act - recent updates”.
Proposed additions to the AI Act appear in bold and italics, while wording proposed for deletion is marked in square brackets, e.g. [Proposed deletion: ...].
Publicly accessible spaces include virtual spaces
As more of our lives and work are conducted online - a trend only likely to continue given developments in the metaverse - it is no surprise that the Committee proposes that public spaces can be either physical or virtual, in either case “regardless of whether certain conditions for access may apply”.
The AI Act prohibits the use of 'real-time' biometric identification systems in publicly accessible spaces for the purposes of law enforcement unless strictly necessary for specific objectives (e.g. searching for victims of crime or preventing a specific, substantial and imminent threat to life).
The Committee makes a number of proposals for this section, removing the exceptions and extending the prohibition (whether with exceptions or not) to cover biometric identification systems whether or not they operate in real time.
The point we think is of particular interest is that the current AI Act says that the prohibition on the use of real-time biometric identification does not cover online spaces, as they are not physical spaces. However, the Committee is clearly concerned that 'real-time' biometric identification systems could be used in the virtual world and considers that they should be prohibited in virtual spaces too (nb: there is no explanation as to why the proposed 'virtual' is preferred to the deleted 'online').
The message is that AI can pose a risk of harm whether those harms arise in physical, online or virtual public spaces, regardless of whether conditions for access apply.
Proposed amendments to Article 5 (Prohibited AI practices)
For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical or virtual place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. [Proposed deletion: Online spaces are not covered either, as they are not physical spaces.] The same principle should apply to virtual publicly accessible spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Harm includes economic harm
The AI Act also seeks to prohibit the placing on the market or putting into service of AI which exploits the vulnerabilities of specific groups or uses subliminal techniques to distort a person's behaviour in a way that causes harm to that person or another person. But what sorts of harm are covered?
The Committee has proposed to amend the AI Act so that harms in this instance:
- include economic harm in addition to physical and psychological harm; and
- explicitly include both material and non-material harm.
Proposed amendments to Article 5 (Prohibited AI practices)
The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys [Proposed deletion: subliminal] techniques [Proposed deletion: beyond a person's consciousness in order to] with the effect or likely effect of materially [Proposed deletion: distort] distorting a person’s behaviour in a manner that causes or is likely to cause that person or another person material or non-material harm including physical [Proposed deletion: or], psychological or economic harm;
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a [Proposed deletion: specific group of persons] person due to their known or predicted personality or social or economic situation or due to their age, physical or mental [Proposed deletion: disability] capacity, in order to materially distort the behaviour of a person [Proposed deletion: pertaining to that group] in a manner that causes or is likely to cause that person or another person material or non-material harm, including physical, psychological or economic harm;
Machine-generated news is high-risk
The AI Act identifies specific types of AI systems as high-risk. These include AI systems for the management and operation of critical infrastructure, education and vocational training, law enforcement, and the administration of justice and democratic processes. High-risk AI would be subject to specific obligations under the AI Act, such as appropriate human oversight and minimum requirements for technical specification and documentation.
The Committee proposes an additional high-risk AI system: machine-generated news. The message here is that the list of high-risk AI systems is not static; the list will need to be updated over time as AI systems (and the markets in which they are used) change.
Proposed addition of machine-generated news as a high-risk AI system
AI systems used in media and culture, in particular those that create and disseminate machine-generated news articles and those that suggest or prioritize audiovisual content should be considered high-risk, since those systems may influence society, spread disinformation and misinformation, have a negative impact on elections and other democratic processes and impact cultural and linguistic diversity.
Comment
The AI Act was always going to be the subject of debate and amendment. We are now seeing specific proposals for what those amendments should be. That does not mean they will be accepted, but they do give an indication of the areas of greatest risk and concern, as well as where the AI Act may not be drafted as some consider necessary (e.g. with sufficient precision or flexibility). In other words, watch this space.
If you would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.
This article was written by Tom Whittaker and Pooja Bokhiria.
Source: “Overall, the Rapporteur welcomes the European Commission’s proposal; however, would like to suggest a few amendments mainly to extend the list of high-risk AI applications in areas of education, media and culture under Annex III and to modify certain provisions related to banned practices under Article 5.” (Committee on Culture and Education documents: https://www.europarl.europa.eu/committees/en/cult/documents/latest-documents)