The European Parliament’s Legal Affairs (JURI) Committee, one of the Parliament’s 20 standing committees composed of Members of the European Parliament (MEPs), recently held a session discussing the EU Artificial Intelligence Act (“AI Act”). Here, we highlight key 'thinking points' from that session to give an indication of where the AI Act may change from its current draft. The session was short, so potential answers will be the subject of further debate.
For background on the European Commission’s proposed AI Act, see our articles “Artificial intelligence - EU Commission publishes proposed regulations” and “EU Artificial Intelligence Act - what has happened so far and what to expect next”.
1. Does the AI Act address the risk of bias and discrimination in AI?
AI has the potential to bring many benefits to users and wider society. However, there is also the risk that AI causes, entrenches or amplifies bias and discrimination (as others have written about, such as the Centre for Data Ethics and Innovation).
Data sources, data quality and choice of model are just a few of the ways in which an AI tool can lead to, or increase, bias in decision-making. As one member of the JURI Committee put it, there is the potential for AI tools to become “weapons of math destruction”.
The AI Act is an opportunity to address the risk of bias in AI. The proposal states that "The obligations for ex ante testing, risk management and human oversight will also facilitate the respect of other fundamental rights by minimising the risk of erroneous or biased AI-assisted decisions."
However, whether or not those obligations go far enough to address the risk of bias will continue to be debated.
2. Does the AI Act help make AI trustworthy?
AI tools will only be used if stakeholders, including end users, consider them trustworthy. The EU AI Act pulls on several policy levers to build that trust, three of which are:
- Making application of the Act mandatory (rather than relying on Member States and sectors to develop their own mandatory rules or voluntary best practice).
- Establishing a European Artificial Intelligence Board to facilitate harmonised implementation of the AI Act and to assist Member States' supervisory bodies.
- Imposing differing transparency obligations on AI at different risk levels before it is put on the market. For example, high-risk AI will be subject to risk assessments and to requirements to provide specific information to users before the AI tool is used.
However, AI has ethical implications which affect whether or not it is viewed as trustworthy (and therefore whether it will be used and invested in). The EU AI Act is designed, in part, to ensure that AI is used ethically, reflecting the four ethical principles rooted in the EU's fundamental rights and identified by the High-Level Expert Group on AI (which advised the EU Commission on its AI strategy): respect for human autonomy; prevention of harm; fairness; and explicability.
The extent to which the AI Act achieves those (and other) ethical aims was debated at the JURI Committee session and will undoubtedly remain a matter for continued debate.
3. Does the EU AI Act impose a proportionate 'regulatory burden'?
AI operators will incur costs in complying with the EU AI Act's various obligations. These are a regulatory 'burden', but that burden is not the same for all AI systems. For example, high-risk AI systems must meet greater transparency, documentation and human-oversight obligations than non-high-risk systems. The burden of the Act's obligations needs to be proportionate to the risks the AI system poses.
The EU AI Act proposal includes an impact assessment of this regulatory burden:
Businesses or public authorities that develop or use AI applications that constitute a high risk for the safety or fundamental rights of citizens would have to comply with specific requirements and obligations. Compliance with these requirements would imply costs amounting to approximately EUR 6,000 to EUR 7,000 for the supply of an average high-risk AI system of around EUR 170,000 by 2025. For AI users, there would also be the annual cost for the time spent on ensuring human oversight where this is appropriate, depending on the use case. Those have been estimated at approximately EUR 5,000 to EUR 8,000 per year. Verification costs could amount to another EUR 3,000 to EUR 7,500 for suppliers of high-risk AI. Businesses or public authorities that develop or use any AI applications not classified as high risk would only have minimal obligations of information.
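To put those figures in context, the short sketch below combines the impact assessment's estimated ranges for a single average high-risk AI system. The way the line items are combined (adding a supplier's compliance and verification costs, and expressing the total as a share of the system's value) is our own illustration, not a calculation the proposal itself performs:

```python
# Illustrative only: combines the impact assessment's estimated EUR ranges
# for one average high-risk AI system. How these line items interact in
# practice is an assumption, not something the proposal spells out.

system_value = 170_000               # average high-risk AI system, by 2025
compliance = (6_000, 7_000)          # supplier compliance costs (low, high)
verification = (3_000, 7_500)        # possible supplier verification costs
oversight_per_year = (5_000, 8_000)  # user's annual human-oversight costs

# Add the supplier's one-off line items, then express them as a
# percentage of the system's value.
supplier_total = tuple(c + v for c, v in zip(compliance, verification))
share_of_value = tuple(round(100 * t / system_value, 1) for t in supplier_total)

print(f"Supplier one-off costs: EUR {supplier_total[0]:,} to {supplier_total[1]:,}")
print(f"As a share of system value: {share_of_value[0]}% to {share_of_value[1]}%")
print(f"User oversight: EUR {oversight_per_year[0]:,} to {oversight_per_year[1]:,} per year")
```

On that rough reading, a supplier's one-off costs sit somewhere around 5% to 9% of the value of an average high-risk system; figures of that order are what the 'proportionality' debate turns on.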
The EU AI Act was proposed on the basis that "The costs incurred by operators are proportionate to the objectives achieved and the economic and reputational benefits that operators can expect from this proposal."
Whether or not those costs are considered "proportionate", the extent to which this regulatory burden has direct consequences for investment and innovation in AI, and how the EU AI Act should be amended, will continue to be matters for debate.
This article was written by Tom Whittaker and Kayla Urbanski.
The proposed regulatory framework on Artificial Intelligence [has] the following specific objectives:

- ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- ensure legal certainty to facilitate investment and innovation in AI;
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Source: https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX%3a52021PC0206&from=EN