The EU Commission has published guidelines on the definition of ‘Artificial Intelligence System’ in the AI Act. The purpose is to ‘assist providers and other relevant persons in determining whether a software system constitutes an AI system to facilitate the effective application of the rules’ (here). Put another way, if a system does not meet the definition of an ‘AI system’, it is not directly subject to the AI Act.
We summarise in this article the key points from the guidelines:
- the guidelines on the AI system definition are not binding and, although approved by the Commission, have not yet been formally adopted. They are designed to evolve over time and will be updated as necessary, in particular in light of practical experience, new questions and use cases that arise.
- it is not possible to provide an automatic determination of, or an exhaustive list of, all systems that fall within or outside the definition of an AI system. However, the guidelines do list some types of systems that likely fall outside the definition (see below).
- analysis of the differences between AI systems and general-purpose AI models is outside the scope of the guidelines.
The definition of an AI system, set out in Article 3(1) AI Act, is as follows and comprises seven main elements:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
(1) a machine-based system;
- The term ‘machine-based’ covers a wide variety of computational systems, from quantum computing to biological or organic systems, so long as they provide computational capacity.
(2) that is designed to operate with varying levels of autonomy;
- systems that are designed to operate with some reasonable degree of independence of actions fulfil the condition of autonomy in the definition of an AI system.
- systems that are designed to operate solely with full manual human involvement and intervention are excluded. Human involvement and human intervention can be either direct, e.g. through manual controls, or indirect, e.g. through automated systems-based controls which allow humans to delegate or supervise system operations.
(3) that may exhibit adaptiveness after deployment;
- the guidelines state that the use of the term ‘may’ here indicates that a system may, but does not necessarily have to, possess adaptiveness or self-learning capabilities after deployment to constitute an AI system. Adaptiveness is therefore informative but not decisive when determining whether a system qualifies as an AI system.
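As a purely hypothetical illustration of adaptiveness (our sketch, not an example from the guidelines), a deployed system might adjust its own decision behaviour in response to feedback while in use:

```python
# Hypothetical sketch: a deployed spam filter that adapts after deployment
# by updating word weights from user feedback (a simple online update).
weights = {"offer": 0.5, "meeting": -0.5}  # initial weights, learned pre-deployment

def score(message: str) -> float:
    """Sum the weights of the words in the message; positive means spam."""
    return sum(weights.get(word, 0.0) for word in message.lower().split())

def user_feedback(message: str, is_spam: bool, lr: float = 0.1) -> None:
    """Adaptiveness: the decision behaviour changes while the system is in use."""
    target = 1.0 if is_spam else -1.0
    for word in message.lower().split():
        weights[word] = weights.get(word, 0.0) + lr * target

user_feedback("limited time offer", True)  # system self-adjusts post-deployment
print(score("limited time offer") > 0)     # classification now reflects the update
```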
(4) and that, for explicit or implicit objectives;
- the guidelines state that:
- Explicit objectives refer to clearly stated goals that are directly encoded by the developer into the system. For example, they may be specified as the optimisation of some cost function, a probability, or a cumulative reward (see the illustrative sketch after this list).
- Implicit objectives refer to goals that are not explicitly stated but may be deduced from the behaviour or underlying assumptions of the system. These objectives may arise from the training data or from the interaction of the AI system with its environment.
- and, as explained in the recitals, these objectives may be different from the intended purpose of the AI system.
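To make the distinction concrete, here is a minimal, hypothetical Python sketch (ours, not the guidelines’) of an explicit objective: the developer directly encodes a cost function (mean squared error) which the system minimises in order to infer its output rule from data:

```python
# Hypothetical illustration of an 'explicit objective': the developer
# directly encodes a cost function that the system minimises.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs

def cost(w: float) -> float:
    """Explicit objective: mean squared error of the predictions w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# Gradient descent on the encoded objective: the parameter w (and hence
# the output rule) is inferred from the data, not fixed in advance.
w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

print(f"learned weight: {w:.2f}, final cost: {cost(w):.3f}")
```

An implicit objective, by contrast, would not appear in the code at all: it would have to be deduced from the system's behaviour, training data or interaction with its environment.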
(5) infers, from the input it receives, how to generate outputs, (6) such as predictions, content, recommendations, or decisions
- AI systems “should be distinguished from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations”.
- the guidelines state that systems likely to fall outside the definition include: systems for improving mathematical optimisation, ‘basic data processing’ systems, systems based on classical heuristics, and ‘simple’ prediction systems.
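Purely for illustration (a hypothetical sketch, not an example from the guidelines), the contrast between a rule defined solely by a natural person and a rule inferred from data might look like this:

```python
# (a) A fixed rule authored by a natural person and executed automatically.
# Systems of this kind are the sort the guidelines suggest fall outside
# the definition of an AI system.
def rule_based_screen(income: float) -> str:
    return "approve" if income > 30_000 else "refer"  # human-defined threshold

# (b) A system that infers its decision rule from labelled historical data.
# Here the threshold is derived from examples rather than written by hand.
history = [(25_000, "refer"), (32_000, "approve"), (41_000, "approve"), (28_000, "refer")]

def learn_threshold(examples: list[tuple[float, str]]) -> float:
    approved = [x for x, label in examples if label == "approve"]
    referred = [x for x, label in examples if label == "refer"]
    return (max(referred) + min(approved)) / 2  # boundary inferred from data

THRESHOLD = learn_threshold(history)  # 30_000.0 for the data above

def learned_screen(income: float) -> str:
    return "approve" if income > THRESHOLD else "refer"
```

Whether a real system meets the definition will of course turn on the system as a whole, but the sketch shows where the ‘inference’ element bites.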
(7) that can influence physical or virtual environments
- the guidelines state that “Reference to ‘physical or virtual environments’ indicates that the influence of an AI system may be both to tangible, physical objects (e.g. robot arm) and to virtual environments, including digital spaces, data flows, and software ecosystems.”
The guidelines state that the definition of an AI system adopts a lifecycle-based perspective encompassing two main phases: the pre-deployment or ‘building’ phase of the system and the post-deployment or ‘use’ phase of the system. Further:
the seven elements set out in that definition are not required to be present continuously throughout both phases of that lifecycle. Instead, the definition acknowledges that specific elements may appear at one phase but may not persist across both phases. This approach to defining an AI system reflects the complexity and diversity of AI systems, ensuring that the definition aligns with the AI Act's objectives by accommodating a wide range of AI systems.
Notably the guidelines include among the concluding remarks:
“The vast majority of systems, even if they qualify as AI systems within the meaning of Article 3(1) AI Act, will not be subject to any regulatory requirements under the AI Act.”
However, the guidelines make clear that the definition of an AI system should not be applied mechanically; each system must be assessed based on its specific characteristics.
As of 2 February 2025, the first rules under the Artificial Intelligence Act (AI Act) started to apply, including the AI system definition, AI literacy, and prohibitions on unacceptable AI use cases. The EU has also produced guidelines on prohibited AI systems, which we summarise here.
To help you work out whether the EU AI Act applies, check out our flowchart navigating key aspects of the Act (here).
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.