On 24 October 2024, the European Commission Joint Research Centre published the briefing “Harmonised Standards for European AI Act”, following the entry into force of the European AI Act (the “Act”) on 1 August 2024. The briefing “discusses some of the key characteristics expected [by the authors, and not necessarily the European Commission] from upcoming standards that would support the implementation of the AI Act.”

The relevant context is that high-risk AI systems will have to comply with the Act’s provisions after a two- or three-year transition period (depending on the type of system identified in the Annexes to the Regulation). Harmonised standards will support compliance with the Act by setting out ‘concrete approaches that can be adopted to meet these requirements in practice’ for high-risk AI systems.

When the European Commission requested that the European Standardisation Organisations develop standards, it explained the role of standards as:

“…important instruments to support the implementation of Union policies and legislation and to ensure a high level of protection of safety and fundamental rights for all persons in the Union. Standards can also support the establishment of equal conditions of competition and a level playing field for the design and development of AI systems, in particular for small and medium-sized enterprises that develop AI solutions.”

The briefing sets out the standardisation deliverables requested by the European Commission and what each is expected to provide:

  • Risk management – specification of a risk management system for products and services using AI.
  • Data governance and quality – defining the data quality metrics and governance processes, with a definition of the evidence required to support these choices.
  • Record keeping – defining the record-keeping requirements in relation to the tracing and recording of events and information in AI systems.
  • Transparency – defining all relevant transparency information required in order to support compliance with Article 13 of the Act.
  • Human oversight – defining clear requirements that support providers of high-risk AI systems in selecting, implementing and verifying the effectiveness of human oversight measures.
  • Accuracy – to support compliance with Article 15 of the Act, the standards are expected to define requirements that support providers of high-risk AI systems in the selection of relevant and effective accuracy metrics and thresholds.
  • Robustness – standardisation is expected to define requirements related to the resilience of high-risk AI systems when deployed.
  • Cybersecurity – defining the technical and organisational measures to achieve a level of cybersecurity that is suitable to the risks of AI systems.
  • Quality management – specifying how providers of high-risk AI systems have to establish an effective quality management system that complies with the Act.
  • Conformity assessment – defining the procedures and processes required to assess the conformity of high-risk AI systems with the Act before they are placed on the market or put into service.

The briefing also presents ‘a series of characteristics that harmonised standards for the AI Act are expected to display, based on the analysis of the final legal text and the standardisation request.’ These include being tailored to the objectives of the Act, oriented to AI systems and products, sufficiently prescriptive and clear, applicable across sectors and systems, aligned with the state of the art, and cohesive and complementary.

As the briefing explains, the main committee tasked with creating AI standards for the European Union, Joint Technical Committee (JTC) 21 of CEN-CENELEC, has published an overview of 37 standardisation activities in support of the AI Act. There is no set date by which the standards must be completed. Whilst one would expect them to be produced in good time before the transition periods end, the authors note the complexity and volume of the standards, and the need for stakeholder engagement, involved in producing them.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, or Martin Cook.

For the latest on AI law and regulation, see our blog and newsletter.