Written by Christopher Walker
Artificial intelligence (“AI”) and machine learning (“ML”) remain challenging areas for financial regulators, given their increasing technological sophistication and the growing reliance that financial institutions place on them.
Current use cases for AI and ML include:
1. Customer-focused applications (e.g. front-office areas such as credit-scoring, calculating insurance premiums, robo-advisors and “chatbots”);
2. Operational functions (e.g. firm risk management and general market analysis);
3. Algorithmic or automated trading and portfolio management; and
4. Compliance functions (e.g. back-office functions such as KYC and AML processes, fraud monitoring and identity theft detection).
The risks in each context are different and present separate regulatory issues – while algorithmic trading may conjure thoughts of historic “flash crashes”, the potential harms of customer-facing algorithms in areas such as consumer credit and insurance are often less well publicised and less well understood.
The UK Government’s Centre for Data Ethics and Innovation (“CDEI”) recently published its “Review into bias in algorithmic decision making” (the “Report”). CDEI provides advice to the UK government on the responsible use of AI and works closely with the UK’s regulatory bodies, including the Financial Conduct Authority (“FCA”).
The Report notes the following findings in respect of financial services:
· A present danger of entrenching historic biases: “In financial services, we saw a much more mature sector that has long used data to support decision-making. Finance relies on making accurate predictions about peoples’ behaviours, for example how likely they are to repay debts. However, specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems” – a key point, given that “financial organisations [on the whole] train their algorithms on historical data”. The Report accordingly stresses the importance of the availability of, and appropriate access to, high-quality datasets;
· Differing approaches to risk: “We found financial service organisations ranged from being highly innovative to more risk-averse in their use of new algorithmic approaches. They are keen to test their systems for bias, but there are mixed views and approaches regarding how this should be done. This was particularly evident around the collection and use of protected characteristic data, and therefore organisations’ ability to monitor outcomes.” – the Report also noted that human review of algorithmic decisions requires “significant oversight to ensure fair operation and to effectively mitigate bias”;
· A focus on credit assessment, a particularly advanced and continually evolving use case: “Our main focus within financial services was on credit-scoring decisions made about individuals by traditional banks. Our work found the key obstacles to further innovation in the sector included data availability, quality and how to source data ethically, available techniques with sufficient explainability, risk averse culture, in some parts, given the impacts of the financial crisis and difficulty in gauging consumer and wider public acceptance”;
· Using explainability to build consumer trust: “Explainability of models used in financial services, in particular in customer-facing decisions, is key for organisations and regulators to identify and mitigate discriminatory outcomes and for fostering customer trust in the use of algorithms” – explainability in this context means “the ability to understand and summarise the inner workings of a model, including the factors that have gone into the model” (a simple illustration follows below).
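By way of illustration only – the Report does not prescribe any particular technique, and the feature names and figures below are invented for the purpose – the most rudimentary form of such a summary is reading out the learned weights of a simple linear credit model, sketched here in Python with scikit-learn:

# A minimal, hypothetical sketch (not a method from the Report): a toy
# logistic-regression credit model whose learned weights can be read out
# as "the factors that have gone into the model". All feature names and
# data below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_band", "years_at_address", "prior_defaults"]
X = np.array([[3, 5, 0], [1, 1, 2], [4, 10, 0], [2, 2, 1]])  # applicants
y = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (historical labels)

model = LogisticRegression().fit(X, y)

# The simplest summary of the model's inner workings: one weight per factor.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")

Production credit models are, of course, far more complex, and non-linear models require dedicated explainability techniques to produce comparable summaries; the sketch simply shows what a summary of “the factors that have gone into the model” can look like at its simplest.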
What’s next?
AI and ML remain a priority for the FCA, as outlined in its 2020/21 Business Plan:
“We will deepen our engagement with industry and society on artificial intelligence, specifically machine learning, and focus on how to enable safe, appropriate and ethical use of new technologies.”
In this regard, the Report recommends: “Sector regulators and industry bodies should help create oversight and technical guidance for responsible bias detection and mitigation in their individual sectors, adding context-specific detail to the existing cross-cutting guidance on data protection, and any new cross-cutting guidance on the Equality Act”.
Indeed, the FCA is currently working with the Alan Turing Institute on a year-long project on AI transparency, building on a prior joint regulatory survey on machine learning in UK financial services. The findings from this project will be published in early 2021. Earlier this year, the Information Commissioner’s Office (the UK data protection regulator) and the Alan Turing Institute published their guidance “Explaining decisions made with AI” – further collaborative regulatory initiatives in these areas seem likely in 2021.
The FCA has previously indicated that principles such as “transparency” and “accountability” can provide a useful existing framework for analysing the responsible use of AI and machine learning, but that it will have to assess each application on a case-by-case basis, identifying the “specific harms” and then determining the “specific safeguards needed”. Likewise, it is worth remembering that, at a high level, the Principles in the FCA’s Handbook require firms to “pay due regard to the interests of [their] customers and treat them fairly” and to “take reasonable care to organise and control [their] affairs responsibly and effectively, with adequate risk management systems”. There is therefore an expectation that, where firms use AI and machine learning, they have a clear understanding of the technology and of their governance processes relating to it. Interestingly, 75% of respondents to the FCA and Bank of England’s prior survey said they did not consider the UK financial regulatory regime an “unjustified barrier to deploying ML algorithms” – however, respondents noted that they would nonetheless “benefit from additional guidance” on how to apply existing regulations to ML.
The balance between promoting financial innovation and ensuring consumer protection will become an increasing challenge for regulators and firms alike. As the minutes of the inaugural meeting of the UK’s AI Public-Private Forum suggest, Covid-19 has “accelerated the pace of automation and adoption in AI in financial services”, and firms should ensure their controls keep pace and “focus on the resilience of their AI systems in the short-term”. In the longer term, firms will need to consider “AI and data management in a more holistic manner and within the context of their wider technology infrastructures, as well as adjusting risk management processes accordingly”.
Our additional commentary on the Report’s findings as to what constitute good governance across those sectors using AI is available here.
“At a basic level, firms using this technology must keep one key question in mind, not just ‘is this legal?’ but ‘is this morally right?’” – Christopher Woolard, then Executive Director of Strategy and Competition at the FCA, in his speech “The future of regulation: AI for consumer good”.