The BoE and PRA (the Regulators) have recently published a joint response (the Response), in the form of a letter to the government, responding to its White Paper and setting out their strategic approach to AI and machine learning (ML). The key words for the Regulators are ‘safe’ and ‘responsible’ and, in a nutshell, this reflects the risks that they perceive AI and ML could pose to financial stability.
There are some important themes in the Response, and these include:
- Safe, responsible and within remit: The BoE and PRA are working to deliver safe and responsible AI and ML within their regulatory remit i.e. in a way that is consistent with their statutory objectives. The primary statutory objectives of the BoE are to maintain monetary and financial stability, and for the PRA, to promote the safety and soundness of regulated firms. Both Regulators have secondary objectives which include supporting economic policy, economic growth, and the international competitiveness of the UK.
- An ongoing project: The BoE and PRA (and also the FCA, whose response was separate and is the subject of a separate blog) have been examining the adoption of these technologies over the past few years and will continue to do so as the technologies evolve, which they are doing at pace. There are already many examples of AI and ML being adopted in financial services, including in fraud and money laundering detection, credit decisioning, customer engagement, and data and analytics, but there is more work to do as the capabilities of the technology evolve and as its adoption in financial services increases.
- Opportunities: AI and ML have the potential to enhance the delivery of financial services both within the UK and globally. These technologies could make financial services more accessible and more efficient; they could bring benefits to consumers, firms and the markets. The Regulators want to deliver a regulatory framework that enables the benefits that AI and ML can bring.
- Risks: The Regulators are technology-agnostic but cautious about risks that could adversely affect their statutory objectives. Addressing these risks is central to the safe and effective delivery of the opportunities on offer.
- The ‘five principles’: the approach being taken by the Regulators is broadly consistent with the government’s ‘five principles’ which are: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.
- Collaboration: regulatory collaboration in the AI space will be critical to ensuring a consistent approach and the Regulators have been working with the FCA and with the Digital Regulation Cooperation Forum (DRCF) which includes the Information Commissioner’s Office, Ofcom and the Competition and Markets Authority.
The ‘five principles’ merit more detailed analysis:
- Safety, security and robustness: AI systems should function in a robust, secure and safe way through their lifecycle. Risks should be continually identified, addressed, and managed. There are specific risks to the financial services markets from outsourcing and third parties, and this will include those parties that provide the underlying infrastructures on which AI applications are developed and deployed. Regulated firms are expected to identify and manage these risks. The Regulators (together with the FCA) have developed a coordinated regulatory and supervisory infrastructure to strengthen the operational resilience of the UK’s financial services sector.
- Transparency and explainability: A lack of transparency and explainability poses risks to the financial system. Fairness and transparency are important concepts in data processing and need to apply equally to AI and ML, particularly given that these technologies are complex and challenge established concepts of transparency and explainability.
- Fairness: AI and ML should not undermine legal rights, discriminate, or produce unfair outcomes.
- Accountability and governance: Good governance will need to underpin the safe and responsible adoption of AI and ML. There must be effective oversight of the supply and use of AI systems and clear lines of accountability across the AI lifecycle. The Senior Managers and Certification Regime (SM&CR) requires regulated firms to ensure that one or more senior managers has overall responsibility for a firm’s main activities, business areas and management. AI and ML would fall squarely in scope of SM&CR. Similar requirements are echoed in other regulatory requirements including the PRA Rulebook’s ‘General Organisation Requirements’, ‘Risk Control’ provisions, and ‘Model Risk Management’ principles.
- Contestability and redress: Persons impacted by harmful AI outcomes should be able to contest them. Broadly, individuals are protected from automated processing that can have a legal or otherwise significant impact on them.
Against the backdrop of their duty to maintain financial stability, the Regulators have taken, and continue to take, a pro-innovation approach, with the aim of ensuring that the regulation of AI and ML in the financial services space is proportionate, trusted, agile, clear and collaborative. This is a rapidly developing area: both the technology itself and the uses being made of it are evolving at pace. The Regulators will continue their important work in this area, including identifying beneficial use cases, running pilot projects and analysis, sharing expertise, establishing guardrails, and releasing updated guidance as needed to help regulated firms understand how to apply existing regulatory rules to novel technologies and how regulatory expectations might evolve around developing technologies.
A final note on operational resilience and the regulatory response to the systemic risks posed by critical third parties (CTPs): the Regulators (together with the FCA) have collaborated on their response to this, and you can read our post on it here. The proposed CTP regime could apply to the provision of AI and ML technologies should providers of AI services to the financial sector emerge and be designated as CTPs by HM Treasury.
For more news and insight into financial services regulation, subscribe to our monthly newsletter here.
“We have engaged extensively – and will continue to do so – with the tech sector, academia, and financial services firms to keep up with the rapid pace of technological change. Thus far, we have been able to meet our statutory objectives while supporting the safe and responsible adoption of AI/ML in financial services. Given the rapid pace of innovation and the evolution of use cases, we will keep our approach under continuous review.”