With reports that the EU AI Act is to be adopted by EU ministers on 6 December 2022, greater focus is being placed on working out how the AI Act will apply in practice.
Open Loop ('a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies') has 'tested selected articles of the draft [AI Act] to assess, in practice, how understandable, (technically) feasible, and effective they are.' It did this with 53 AI companies spread across Europe, of varying sizes and with different roles in the AI lifecycle.
Here we draw out the key observations and recommendations of the report, in particular:
- determining the roles of different 'AI actors' under the AI Act is not always easy, and this affects who is responsible and liable for what under the AI Act;
- further guidance is needed to operationalise and comply with the AI Act's risk management, data quality and technical documentation requirements; without such guidance the requirements may be impractical and burdensome;
- requirements for transparency and explainability depend on who needs what information and when.
Lessons learned from reports such as Open Loop's are highly relevant right now because:
- those developing, procuring and using AI are trying to work out how to comply with the AI Act;
- they will likely inform government and regulator reviews of AI-specific regulation, including: the UK government's consultation on AI-specific regulation and inquiry into AI governance; and the FCA's, PRA's and Bank of England's discussion paper on the role of AI in financial services.
What were the key observations and recommendations?
Taxonomy of AI actors (Article 3)
- It may be difficult to apply the EU AI Act in the real world:
- Participants found descriptions of AI Actors 'clear on paper' but 'in reality the roles of users and providers are not as distinct as the AIA presupposes, particularly in the context of the dynamic and intertwined relationships and practices that occur between the various actors involved in the development, deployment, and monitoring of AI systems.'
- Sometimes the lines between different AI actors are blurred. For example, there may not always be a clear distinction between a user and a provider, such as where a system is co-developed, and sometimes a single party can be both.
- Is it clear who is responsible?
- The impact is that this 'raises questions as to who should be held responsible for the requirements in the AIA and who is responsible when these requirements are not met.'
- The report recommends: consider revising and/or expanding the taxonomy of AI actors in Article 3, and describing possible interactions between actors (e.g., co-production of AI systems and use of open-source tooling), so as to more accurately reflect the AI ecosystem.
Risk management (Article 9)
- Risk management is useful, whether or not it is required in the EU AI Act, but difficult to operationalise:
- participants said that they were willing to manage risks, including by undertaking risk assessments, even when their AI systems are not classified as high-risk in the AIA.
- however, there are challenges:
- 'it was difficult for [participants] to predict and anticipate how users or third parties would use their AI systems'
- 'participants seemed to focus more on the cause of risks (e.g., model drift and biased data) and less on the impact of these risks on natural persons (e.g., reputational damage, exclusion and discrimination).'
- The report recommends: 'Given the difficulty in assessing "reasonably foreseeable misuse" (Article 9) and the limited focus on the impact of risks, provide guidance on risks and risk assessment, in particular for startups and SMEs'
Data quality requirements (Article 10)
- The EU AI Act is prescriptive and could be unrealistic:
- 'while the data requirements listed in the AIA cover areas that are relevant to consider when developing and deploying AI systems, the absolute nature of how these requirements are phrased and how they should be met (completeness, free of errors, etc.) is highly unrealistic to achieve. The "best effort" approach that was introduced by the European Parliament (i.e., ensuring a data set is free of errors and complete to the best extent possible), is seen as an improvement'
- Guidance is needed:
- 'participants underlined the importance of receiving guidance on the operationalization of these requirements'.
- 'Without further guidance, clear and objective methods, and metrics for establishing compliance with these data quality requirements, this provision in the AIA is seen as impractical'.
- The report recommends: 'Provide more concrete guidance, methodologies, and/or metrics for assessing the data quality requirements through, e.g., subordinate legislation and/or soft law instruments, standardization, or guidance from the regulator (Article 10).'
Technical documentation (Article 11)
- The EU AI Act is prescriptive, leaving little flexibility:
- the authors 'tentatively conclude that the high degree of prescriptiveness of the AIA proposal may curtail the level of discretion needed to fulfill its requirements. In fact, by listing a multitude of specific requirements, highly prescriptive laws such as the AIA often end up also requiring additional prescriptive guidance, which can make them more difficult to comply with, as there is less flexibility.'
- 'While the AIA improves legal certainty by making it more explicit what is expected of providers, it unintendedly poses additional challenges to AI companies when it comes to interpreting and complying with such legal requirements. This contrasts with non-prescriptive laws that have a high level of abstraction, where more is left to interpretation in practice (e.g., through the guidance of the regulator, creation of market standards, and/or jurisprudence).'
- Again, further guidance is needed:
- 'Given the high level of detail in the AIA, further guidance by the legislator or the regulator on describing their AI systems is desired by the participants.'
- 'Participants also noted that while the requirements are quite granular, they do not contain clear descriptions on how to document these requirements (e.g., level of detail, metrics and methodology).'
- The report recommends: 'Provide more concrete guidance, templates, and/or metrics for the technical documentation through, e.g., subordinate legislation and/or soft law instruments, standardization, or guidance from the regulator (Article 11).'
Transparency and human oversight (Articles 13 and 14)
- The EU AI Act does not fully reflect the different roles, responsibilities and requirements of those involved in the AI lifecycle:
- 'participants distinguish between the operation of an AI system and the oversight of that system. The latter requires a different level of skills, which implies that different types of information, explanations, and instructions are needed for different target groups. The participants foresee challenges when it comes to providing transparency and are also unsure how they should balance explainability and model performance. From this activity, we may conclude that the AIA would benefit from clarifications on the way in which different target audiences should be informed about the operation of the AI system.'
- the authors consider that 'The AIA does not (clearly) differentiate between these different roles. In Article 14(4), it mentions how "the individuals responsible for human oversight" should be enabled by the provider to execute their oversight, but there is no clear distinction between a person relying on the outputs of an AI system versus the person responsible for monitoring its performance. We feel that both these functions have a role to play when it comes to human oversight, but they require different types of information to enable their oversight. For instance, the operator (e.g., a doctor) might benefit more from an explanation of an individual model outcome, whereas an AI risk manager might benefit more from an explanation on the accuracy of a system, bias in the data, etc.'
- The report's recommendations include: 'Consider distinguishing more clearly between different audiences for explanations and other transparency requirements (Articles 13 and 14) in the AIA.'
Overall, the report concludes: 'The overall picture (based on the provisions from the AIA that we presented to the participants) was that for the majority of the participants the provisions in the AIA were clear and feasible and could contribute to one of the goals of the legislator: to build and deploy trustworthy AI. However, there were several areas in the AIA with room for improvement and some provisions that might even hinder the other goal of the legislator: enabling the uptake of AI in Europe.'
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Martin Cook.
The report is: Andrade, Norberto Nuno Gomes de, and Antonella Zarra, 'Artificial Intelligence Act: A Policy Prototyping Experiment: Operationalizing the Requirements for AI Systems – Part I' (2022).