The European Parliament's Committee on Legal Affairs has proposed amendments to the EU AI Act. These are in addition to the proposed amendments made by the Committee earlier in the year and reflect the further refinement of, and debate around, the AI Act.

Here, we highlight a selection of the significant changes to come out of the Committee's proposals. Debates within the EU about the text of the AI Act will be of particular interest to industry stakeholders and observers who want to understand the direction of travel, as well as to those awaiting the UK's white paper on AI-specific regulation (expected late 2022).

Proposed additions to the AI Act are shown in bold italics, while wording proposed to be deleted appears underlined, e.g. [Proposed deletion:...]

Specifying general principles applicable to all AI systems

The AI Act seeks to place obligations on those using and deploying high-risk AI systems (whilst prohibiting some AI systems which are fundamentally at odds with EU values and rights).  However, that does not mean that other AI systems have free rein.  The Committee seeks to ensure harmonised standards for AI systems, proposing that:

  • there is a voluntary code of conduct for AI systems which are not high-risk;
  • all AI systems (high-risk or not, excluding those prohibited) are developed and deployed based on defined general principles; and
  • there is further guidance on those principles from the European Commission, EU AI Board and European Standardisation Organisations.

It is unclear what impact a voluntary code of conduct will have.  The proposed amendments refer to it being 'strongly encouraged' - not enforced.  However, we expect that it will be of benefit to industry to have guidance as to how AI systems should be developed and deployed, in particular for those developing AI systems that risk becoming 'high-risk' (whether because of a change in their use and/or a change in what is categorised as 'high-risk' under the AI Act).

The general principles provide guidance to industry as to standards for AI systems.  However, that guidance is limited.  First, the definitions are high-level; how they apply will still depend on their context.  Second, their precise wording risks inconsistency with how a principle is used elsewhere in the AI Act, elsewhere in the EU and in other jurisdictions (see below).  Do such inconsistencies exist and, if so, are they a matter of language or of substance?

The UK's proposed approach to AI-specific regulation also relies on cross-sectoral principles.  Whether parallels can be drawn between the UK's and the EU's principles, and how each is implemented, will depend on how the principles appear in the UK's anticipated white paper on AI-specific regulation in late 2022 and on how the EU's AI Act develops.

Proposed general principles

1. All AI operators shall respect the following general principles that establish a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded: 

  • 'human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. 
  • ‘technical robustness and safety’ means that AI systems shall be developed and used in a way to minimize unintended and unexpected harm as well as being robust in case of unintended problems and being resilient against attempts to alter the use or performance of the AI system so as to allow unlawful use by malicious third parties.
  • ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.
  • ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.
  • ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy. 

2. Paragraph 1 is without prejudice to obligations set up by existing Union and national law. For high-risk AI systems, the general principles are translated into and complied with by providers or users by means of the requirements set out in Articles 8 to 15 of this Regulation. For all other AI systems, the voluntary application on the basis of harmonised standards, technical specifications and codes of conduct as referred to in Article 69 is strongly encouraged with a view to fulfilling the principles listed in paragraph 1. 

3. The Commission and the Board shall issue recommendations that help guiding providers and users on how to develop and use AI systems in accordance with the general principles. European Standardisation Organisations shall take the general principles referred to in paragraph 1 into account as outcome-based objectives when developing the appropriate harmonised standards for high risk AI systems as referred to in Article 40(2b).

Potential inconsistencies?

Terms do not always need to be used consistently. Sometimes they have to flex depending on their context, such as the purpose of the AI Act article.  However, any inconsistencies will be carefully scrutinised by stakeholders to understand how a term applies in a specific context.  

Take the meaning of 'transparency' as an example.

The proposed general principles (see above) include:

  • ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system and affected persons about their rights.

To what extent is this consistent with the requirement for high-risk AI systems of 'Transparency and provision of information to users' (Article 13)?  That requirement does not refer to 'traceability'.

'Traceability' instead appears in 'Record keeping' (Article 12): 'high-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events ('logs') whilst the high-risk AI systems is [sic] operating'; 'The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system.'  The article then sets out the minimum logging capabilities.  These are different to the information required under Article 13.

Whether this has any practical impact remains to be seen; it will depend on the final text of the Act and how it is applied on a case-by-case basis.

Voluntary codes of conduct for non-high-risk AI systems

Industry has for a number of years been developing voluntary codes of conduct for the application of AI ethics (for example, see this AI Ethics study).  These have come under some criticism for, amongst other things, failing to provide mechanisms for redress.  However, the Committee clearly thinks that voluntary codes of conduct have value for non-high-risk systems and so seeks to 'encourage' them on a 'voluntary' basis.

(Some) Proposed voluntary code of conduct amendments

2. Codes of conduct intended to foster the voluntary compliance with the principles underpinning trustworthy AI systems, shall, in particular:

(a) aim for a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems in order to observe such principles;

(b) assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities or whether measures could be put in place in order to increase accessibility, or otherwise support such persons or groups of persons;

(c) consider the way in which the use of their AI systems may have an impact or can increase diversity, gender balance and equality;

(d) have regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities;

(e) reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems;

(f) give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes;

(g) evaluate how AI systems can contribute to environmental sustainability and in particular to the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles.

3. Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders, including scientific researchers, and their representative organisations, in particular trade unions, and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems. Providers adopting codes of conduct will designate at least one natural person responsible for internal monitoring.


If you would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.  Harvey Spencer contributed to this article.