The world's first legislative framework for artificial intelligence systems took another step closer to entering into force.  The next step is for the EU to finalise and approve the text, after which transition periods will begin.  Here we pick out the key points of the AI Act, which the EU sees as setting a global standard, as it argues the GDPR did.

Definition of AI

The EU has adopted the OECD's updated definition of AI, which helps improve international alignment and is intended to distinguish AI from simpler software systems:

An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.

However, some have noted that this definition may not clearly distinguish AI from conventional software, so there remains a legal risk that a given system will be classed as an AI system.

Scope

The provisional agreement also clarifies that the AI Act does not apply to:

  • areas outside the scope of EU law, and it should not affect Member States' competences in national security;
  • AI systems used exclusively for military or defence purposes;
  • AI systems used solely for the purpose of research and innovation;
  • AI systems used by persons for non-professional reasons.

Further refinement has been introduced so that the AI Act aligns with existing sectoral and data protection legislation.

Risk classification

The provisional agreement maintains the risk-based system: prohibited AI systems, high-risk AI systems (facing tougher obligations) and low-risk AI systems (with some transparency requirements).

It clarifies that prohibited AI systems include those used for cognitive behavioural manipulation, untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, and some cases of predictive policing.

The requirements for high-risk AI systems are intended to be more technically feasible and less burdensome, and now include a mandatory fundamental rights impact assessment (amongst other obligations), applicable also to the insurance and banking sectors.  Citizens will have a right to complain about AI systems and to receive explanations about decisions based on high-risk AI systems that impact their rights.

General purpose AI systems and foundation models

There are new provisions for AI systems which can be used for many different purposes - known as general purpose AI - and for cases where general purpose AI is integrated into another high-risk system.

There are also specific rules for foundation models - large systems capable of competently performing a wide range of tasks, such as generating text, video, images and code.  Transparency obligations are imposed upon any foundation model before it is placed on the market.  Providers must implement a policy that respects EU copyright law and summarise the content used for training the model.

There will be a stricter regime for ‘high impact’ foundation models, with obligations covering risk management (including model evaluations), red teaming (documenting simulated adversarial testing), cybersecurity, and energy consumption reporting.

Penalties

The penalties for non-compliance remain high: €35 million or 7% of global annual turnover (whichever is higher) for violations involving banned AI applications, €15 million or 3% for violations of the AI Act's other obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement includes more proportionate caps on fines for small and medium-sized enterprises and start-ups.

Further, the provisional agreement now makes clear that natural or legal persons may make a complaint to the relevant market surveillance authority concerning alleged non-compliance with the AI Act.

Next steps and implementation period

Work will now continue at a technical level to finalise the text before it is submitted for formal adoption, which can be expected in early 2024.

Once approved and published:

  • the AI Act enters into force 20 days after publication in the Official Journal;
  • prohibitions apply 6 months after entry into force;
  • rules on general purpose AI systems, high-risk AI systems, and EU governance of AI apply 12 months after entry into force;
  • and most other provisions apply 24 months after entry into force.

Many will recall their experiences of preparing for the introduction of the GDPR - the time, work and cost involved.  For some, preparing for the AI Act will potentially take longer, given the complexities of the Act and of AI systems, including how they are integrated within an organisation and potentially its business model.  Organisations can expect to see technical and legal guidance published, and developments in (already drafted) model clauses for complying with high-risk AI system requirements.

In parallel with the EU AI Act's development, other jurisdictions are pushing ahead with their own approaches. In the UK, the government is taking forward the steps in the White Paper (see our flowchart for navigating it here) and the momentum gained from the UK's AI Safety Summit, whilst there are proposals in Parliament for AI regulation.  In the USA, implementation of the Biden-Harris Executive Order on AI is now under way, alongside continued enactment of, and proposals for, AI regulation at state level.  There are moves to increase international alignment and interoperability of AI regulation; however, each jurisdiction is likely to take a different path, viewing AI regulation through its own lens and to meet its own specific needs.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.