The European Central Bank ("ECB") has published its opinion on the EU's proposed regulation of Artificial Intelligence (the "AI Act"), in response to a request from the Council of the EU.  This follows proposed amendments to the AI Act from the European Committee of the Regions (here) and, separately, the Committee on Culture and Education (here).  These opinions and proposed amendments are of interest to anyone seeking insight into how the AI Act may change.

The ECB's opinion is of particular importance to credit institutions outside the EU.  The AI Act applies to providers of AI systems who are based outside the EU but place AI systems on the market, or put them into service, in the EU.  The AI Act is also likely to set global standards for managing AI risk.  The EU AI Act and the ECB's opinion come at a time of increased interest from the FCA and the Bank of England in the risks posed by AI in financial services (which we wrote about here).  So the ECB's view on how the AI Act works (or does not) in financial services may shape how the AI Act affects those using AI in financial services in the future, whether or not they fall directly within its scope.

Overall, the ECB welcomes the objectives of the AI Act.  AI-enabled innovation in the banking sector is becoming increasingly important, and the opportunities and risks it creates are cross-border, so the ECB supports EU-wide regulation of AI.

Here, we pick out three points to note from the ECB's opinion.

What are quality management systems for high-risk AI? 

The AI Act requires providers of high-risk AI systems to put in place a quality management system that ensures compliance with the AI Act.  Examples of what that must include (following the lettering of Article 17 of the draft Act) are:

(b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

(c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

(e) technical specifications, including standards, to be applied;

(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;

(g) the risk management system;

(i) procedures related to the reporting of serious incidents and of malfunctioning in accordance with Article 62;

(j) the handling of communications with relevant authorities and interested parties;

(k) systems and procedures for record keeping of all relevant documentation and information;

(m) an accountability framework setting out the responsibilities of the management and other staff with regard to all aspects listed in this paragraph.

Those obligations "shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms" pursuant to the Credit Institutions Directive (2013/36/EU). The ECB welcomes the fact that the AI Act tries to avoid overlap with existing legislative frameworks for credit institutions.  However, some further clarity would be helpful; in particular, the ECB wants clarification of the requirements that apply where credit institutions outsource the use of high-risk AI systems.

Is credit scoring high-risk AI?

High-risk systems under the AI Act include:

"AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use".

The ECB proposes "that AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score and which leverage on the standalone use of linear or logistic regression or decision trees under human supervision should not be classified as high-risk AI systems, provided that the impact of such approaches to the assessment of natural persons’ creditworthiness or credit score is minor."  Put another way, credit scoring based on simple, interpretable models whose impact is "minor" should not be classed as high-risk, and so should not be subject to the increased obligations under the AI Act.
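
To illustrate the kind of model the ECB appears to have in mind, below is a minimal sketch in Python (assuming scikit-learn is available; the feature names, data and decision threshold are hypothetical and for illustration only) of a standalone logistic regression credit-scoring model that produces a recommendation for a human reviewer rather than an automated decision.

```python
# A minimal, illustrative sketch (not the ECB's example): a standalone
# logistic regression credit-scoring model whose output is only a
# recommendation, with a human making the final decision.  The features,
# data and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features:
# [annual income (EUR thousands), debt-to-income ratio, years of credit history]
X_train = np.array([
    [45, 0.35, 6],
    [80, 0.20, 12],
    [30, 0.55, 2],
    [60, 0.40, 8],
    [25, 0.60, 1],
    [95, 0.15, 15],
])
y_train = np.array([1, 1, 0, 1, 0, 1])  # 1 = repaid, 0 = defaulted

# Logistic regression is interpretable: each coefficient shows how a
# feature moves the predicted probability of repayment.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def score_applicant(features):
    """Return a repayment probability and a recommendation for a human reviewer."""
    prob = model.predict_proba(np.array([features]))[0, 1]
    # The model recommends; it does not decide - reflecting the
    # "under human supervision" condition in the ECB's proposal.
    return prob, ("recommend approve" if prob >= 0.5 else "recommend refer")

prob, rec = score_applicant([50, 0.30, 5])
print(f"Repayment probability: {prob:.2f} -> {rec} (final decision by human reviewer)")
```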

In the meantime, the ECB proposes that the requirements flowing from classifying credit scoring AI systems as 'high-risk AI systems' should not enter into effect until the European Commission has adopted common specifications.  Credit scoring AI systems which meet those specifications would be presumed to meet (but could be found not to meet) the requirements for high-risk AI systems under the AI Act (see Articles 40 and 41).

Also, the ECB welcomes the possibility of updating the list of high-risk AI systems specified in the AI Act.  The ECB is of the opinion that "apart from AI systems used to evaluate the credit score or creditworthiness of natural persons, the proposed regulation does not designate as high-risk other systems that might be put into service specifically by credit institutions."  But the ECB recognises that credit institutions are developing AI systems to address other risks.  Examples include "real time monitoring of payments, or profiling of clients or transactions, for anti-money laundering and counter-terrorist financing purposes."

The implication is that additional AI systems used by credit institutions could one day be designated as high-risk, too.

Conformity assessments for credit scoring AI systems - clarification needed?

The AI Act requires high-risk AI systems, such as credit scoring systems, to undergo conformity assessments which verify that the AI Act's requirements for high-risk AI systems have been fulfilled. AI assurance is a central part of the AI Act, just as it is of the UK's national AI strategy (which we wrote about here).  What conformity assessments are, how they are carried out, and who supervises them is important because it goes to the risk management and trustworthiness (and ultimately the take-up) of AI systems.

The ECB highlights a few areas for clarification and further consideration in relation to conformity assessments:

  • several parts of the conformity assessments required by the AI Act are not prudential in nature, as they largely concern the technical assessment of AI systems.  To what extent should the technical assessment of AI systems fall within the scope of prudential supervision of credit institutions?
  • is it necessary to designate relevant authorities in Member States as responsible for supervising the conformity assessments conducted by credit institutions where matters specific to health, safety and fundamental rights are concerned?
  • certain requirements may need to be clarified before they can be supervised effectively - for example, what is meant by the requirement that "training, validation and testing data sets are to be relevant, representative, free of errors and complete"?  A simple sketch below illustrates the point.
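
To illustrate why that wording invites interpretation, here is a minimal sketch in Python (assuming pandas is available; the column names and data are hypothetical): mechanical completeness checks are straightforward, but whether a data set is "representative" or "free of errors" is a matter of judgement that the AI Act's text leaves open.

```python
# A minimal sketch of mechanical data-quality checks on a hypothetical
# credit-scoring training set.  Missing values and completeness are easy
# to measure; "representative" and "free of errors" are not.
import pandas as pd

df = pd.DataFrame({
    "income": [45000, 80000, None, 60000],
    "debt_ratio": [0.35, 0.20, 0.55, 0.40],
    "repaid": [1, 1, 0, 1],
})

# Completeness: share of missing values per column.
missing_share = df.isna().mean()
print("Missing value share per column:")
print(missing_share)

# A naive completeness gate.  Nothing here tells us whether the sample is
# representative of the population the model will actually score.
if missing_share.max() > 0:
    print("Data set fails a naive 'complete' test: missing values found.")
```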

If you, too, would like clarification on the EU AI Act, or would like to discuss the potential impact of the AI Act, please contact Tom Whittaker or Martin Cook.

For a recap on the AI Act, see our articles “Artificial intelligence - EU Commission publishes proposed regulations”, “EU Artificial Intelligence Act - what has happened so far and what to expect next” and “The EU Artificial Intelligence Act - recent updates”.