The EU AI Act is the world’s first comprehensive law on artificial intelligence. Provisionally agreed by EU Member States on 8 December 2023 and by the European Parliament in February 2024, the AI Act is expected to be formally adopted by the European Parliament and the Council in spring 2024. Below we summarise the key parts of the Act, including:

  • who and what is covered;
  • the risk-based approach including prohibitions of certain artificial intelligence practices and specific requirements for high-risk AI systems;
  • rules for the placing on the market of general-purpose AI models;
  • consequences for non-compliance;
  • what happens next.

If you're looking to navigate the EU AI Act, also see our flowchart here.

Who does the AI Act apply to?

The AI Act applies to a range of stakeholders involved in the development, deployment and use of AI systems in the EU. Two key actors will be the ‘Provider’ and ‘Deployer’ of the AI system, the latter replacing ‘User’ in the final text. 

A provider is:

‘a natural or legal person, public authority, agency or other body that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places them on the market or puts the system into service under its own name or trademark, whether for payment or free of charge.’

A deployer is:

‘any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity’.

However, any natural or legal person that imports, distributes or otherwise operates an AI system will also be expected to comply with the obligations the AI Act places on them.

Does the AI Act apply to those outside the EU?

Yes. The Act extends to providers and deployers of AI systems that are located outside the EU, but whose systems affect EU citizens, entities or the EU market. 

The Act can also apply where the AI system is neither placed on the market, nor put into service, nor used in the EU, but its output is intended to be used in the EU. For example, where an operator established in the EU contracts certain services to an operator outside the EU in relation to an activity to be performed by an AI system that qualifies as high-risk. As the Act puts it: 'To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is intended to be used in the Union.'

What systems are covered?

The AI Act covers “AI systems” defined as:

“a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The AI Act adopts the definition developed by the OECD to support a global consensus around the types of systems that are intended to be regulated as AI. As the Act puts it: “The notion of AI system in this Regulation should be clearly defined and closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, facilitate international convergence and wide acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. Moreover, it should be based on key characteristics of artificial intelligence systems, that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations”. The Act recognises the need to define AI systems based on specific criteria: they have the capability to infer; they can influence physical or virtual environments; they are machine-based; they operate with varying levels of autonomy; and they have varying degrees of adaptiveness.

Further guidance is expected to clarify more precisely which systems do and do not fall within the definition.

What systems are excluded?

The AI Act does not apply to:

  • areas outside the scope of EU law, and it does not affect Member States’ competences in national security;
  • AI systems used exclusively for military or defence purposes;
  • AI systems used solely for the purpose of research and innovation; or
  • AI systems used by individuals for purely personal, non-professional activities.

Risk-based approach

The AI Act establishes obligations for AI systems based on their potential risks and their level of impact on individuals and society as a whole. AI systems are divided into systems of limited risk, those posing high risk and those that are prohibited. AI systems of limited risk will only be subject to transparency requirements.

1. Prohibited AI Practices 

The AI Act prohibits certain AI practices which are harmful and abusive, identifying uses that contradict, amongst other things, human dignity, freedom and equality.

Consequently, it is prohibited to place on the market, put into service or use in the EU:

  • Remote biometric identification systems in publicly accessible spaces for law enforcement purposes.
  • AI systems providing social scoring of natural persons.
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
  • AI systems that have the objective or effect of materially distorting human behaviour in a way likely to cause significant harm, in particular sufficiently important adverse impacts on physical or psychological health or on financial interests.

The final negotiations included lengthy discussion on whether the AI Act should contain certain exceptions for the use of AI in remote biometric identification. The final text includes limited exceptions for using remote biometric identification systems in publicly accessible spaces for law enforcement purposes, insofar as their use involves targeted searches for victims of abduction, the prevention of threats to life or physical safety, or the prosecution of serious crimes.

2. High Risk

High-risk systems will fall into one of two categories:

  1. AI systems which are safety components of products, or are themselves products, covered by EU product safety legislation.
  2. AI systems used in any of the following high-level areas:
  • biometrics;
  • critical infrastructure;
  • educational and vocational training;
  • employment, workers management and access to self-employment;
  • access to and enjoyment of essential private services and essential public services and benefits;
  • law enforcement, insofar as their use is permitted under relevant Union or national law;
  • migration, asylum and border control management, insofar as their use is permitted under relevant Union or national law;
  • administration of justice and democratic processes.

The list above is not intended to be exhaustive and is subject to change.

A significant change in the final text means an AI system will not be considered high risk if it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. By default, an AI system shall not be considered high risk if it meets any of the following conditions:

(a)  the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

The AI Act contains criteria for assessing whether an AI system poses a significant risk of harm, including an assessment of the intended purpose of the AI system and the nature and amount of data processed.

For AI systems classified as high risk, strict obligations will apply. Chapter 2 sets out detailed requirements for high-risk systems:

Risk Assessment: According to Article 9 of the AI Act, providers of high-risk AI systems must implement and document a risk management system that covers the entire life cycle of the system. The risk management system must include:

  • An analysis of the known and potential risks associated with the system, taking into account its intended purpose.
  • An evaluation of the risks that may emerge when the system is used for its intended purpose or under conditions of reasonably foreseeable misuse, alongside other possibly arising risks identified through data analysis.
  • The adoption of risk management measures that are proportionate to the level and nature of the risks identified.

The risk management system must be regularly reviewed and updated to reflect any changes in the system, its use, or its environment. The provider must also keep records of the risk management activities and make them available to the competent authorities upon request. It is expected that high-risk systems will be rigorously tested to identify the most appropriate risk management strategies.

Data and Data Governance: Article 10 sets out the data governance and management practices that are required for high-risk AI systems. Such practices include evaluating data sets to identify possible biases and spotting data gaps or shortcomings. Training, validation and testing datasets should also be representative and take into account certain specific factors relevant to the geographical, contextual, behavioural or functional setting within which the high-risk system will be utilised.

Technical documentation: Article 11 requires detailed technical documentation to be produced and kept up to date with a view to demonstrating that the high-risk AI system complies with the requirements set out in the AI Act.

Record Keeping: Pursuant to Article 12, high-risk AI systems must allow for the automatic recording of events over the duration of the system’s lifecycle. This is to facilitate the identification of situations that may result in the AI system presenting a risk and to enable deployers to effectively monitor the AI system in accordance with Article 29.

Transparency: Article 13 requires high-risk systems to be designed with transparency in mind and to enable deployers to interpret the system’s output accurately. Detailed instructions should be included with the AI system, containing the identity and contact details of the provider, along with information explaining the system’s characteristics, capabilities and limitations of performance.

Human oversight: Under Article 14, high-risk systems must be designed and developed in such a way that they can be ‘effectively overseen by natural persons during the period in which the AI system is in use’. Human oversight is an important governance tool and is meant to prevent or minimise risks to health, safety or fundamental rights.

Accuracy, robustness and cybersecurity: Article 15 addresses performance concerns and stipulates that high-risk AI systems should be designed to ensure consistent performance in relation to accuracy, robustness and cybersecurity.

Articles 16–30 set out further specific obligations for Providers, Deployers and other parties, including requirements for a quality management system (Article 17), documentation keeping (Article 18) and corrective actions (Article 21). Article 29 introduces the requirement for a fundamental rights impact assessment before a high-risk AI system is put into use.

3. Low risk AI systems

AI systems which aren't prohibited or high-risk are subject to limited rules, primarily around transparency. These include (subject to exceptions):

  • Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use.
  • Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated.

General purpose AI systems and foundation models

One of the most hotly debated topics during the negotiations was how to regulate general purpose AI (GPAI) models. A GPAI system is defined as an:

‘AI system which is based on a general-purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’

The final text of the AI Act contains specific provisions for GPAI models including further obligations on providers of GPAI models with systemic risk.

GPAI Models

Those that provide GPAI models are subject to specific transparency requirements in Article 52. The key point is that where an AI system engages with natural persons, the natural persons must be informed that they are interacting with AI. Providers should ‘ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated’. Additionally, Article 52c makes it clear that Providers of GPAI models must maintain technical documentation and make information available to Providers who intend to use the GPAI model. Article 52c recommends the use of a code of practice to demonstrate compliance with these provisions.

Systemic Risk

Providers of GPAI models with systemic risk will have additional obligations: to perform model evaluations (including conducting and documenting adversarial testing), assess and mitigate systemic risks, track and report incidents to the AI Office, and ensure cybersecurity protection (Article 52d). A code of practice is also recommended.

What are the consequences for non-compliance?

The penalties for non-compliance remain high: €35 million or 7% of global annual turnover (whichever is higher) for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the final text maintains more proportionate caps on fines for small and medium-sized enterprises and start-ups.

The European Commission has launched the European Artificial Intelligence Office which will primarily support national authorities’ enforcement of AI rules.

What happens next? 

The text has been approved by Member States and the European Parliament sub-committee responsible for the Act. It awaits formal adoption in an upcoming Parliament plenary session and final Council endorsement.

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. That is anticipated for April/May 2024.

However, not all provisions of the AI Act will apply immediately. The AI Act provides for a transitional period of 24 months during which organisations must implement the requirements. This means that the EU AI Act should apply in full force from 2026. Importantly though, Article 85 lists certain exceptions to the above and some parts of the AI Act will apply earlier:

  • 6 months for prohibited AI practices.
  • 12 months for the rules on general purpose AI models.
  • 24 months for providers of GPAI models that were already on the market before the AI Act enters into application.

The adoption of the AI Act will be a landmark moment globally for the regulation of artificial intelligence. Given the comprehensive nature of the requirements and the penalties for non-compliance, significant resources will be needed to ensure compliance. For many, the work will be as significant as, if not more significant than, preparing for the introduction of the GDPR. However, proactive engagement and strategic planning as early as possible will assist the transition to the new regulatory era.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).

This article was written by Sam Efiong.