On 26 October 2023, the PRA and the FCA published a feedback statement summarising the responses to their joint discussion paper (DP5/22) on the use of artificial intelligence (AI) and machine learning (ML) in financial services.
The regulators include an early disclaimer that the feedback statement contains no policy proposals and does not signal how they would implement any future proposals in this area.
The statement identifies key themes from the responses to DP5/22 received from various stakeholders including industry bodies, banks and technology providers.
The key points raised include:
- A regulatory definition of AI would not be useful. Many respondents instead favoured alternative, principles-based or risk-based approaches, focusing on specific characteristics of AI or on the risks posed or amplified by AI.
- Given the rapidly evolving nature of AI, it would be helpful for regulators to provide 'live' regulatory guidance, i.e. periodically updated guidance and examples of best practice.
- Ongoing industry engagement is important. Initiatives such as the AI Public Private Forum have been useful and could serve as templates for ongoing public-private engagement.
- Respondents considered the regulatory landscape to be complex and fragmented with respect to AI. More coordination and alignment between regulators, both domestic and international, would therefore be helpful.
- Most respondents said that data regulation, in particular, is fragmented, and that more regulatory alignment would be useful in addressing data risks, especially those related to fairness, bias, and management of protected characteristics.
- A key focus of regulation should be consumer outcomes and protection, given the risks of bias, lack of transparency, and exploitation of vulnerable consumers.
- The increasing use of third-party models and data is a concern and an area where more regulatory guidance would be helpful. Respondents suggested that third-party providers of AI solutions should provide evidence of the responsible development, independent validation, and ongoing governance of their AI products. Respondents also noted the relevance of a previous discussion paper on operational resilience in the context of third parties (DP3/22).
- AI systems can be complex and span many areas of a firm. A joined-up approach across business units and functions could therefore help mitigate AI risks; in particular, closer collaboration between data management and model risk management teams would be beneficial.
- Respondents considered that existing firm governance structures, and regulatory frameworks such as the Senior Managers and Certification Regime (SM&CR), are sufficient to address AI risks.
Related blog posts include our update on the current approach to AI regulation in financial services and our coverage of UK Finance's response to the Government White Paper on AI Regulation. We will continue to provide updates on AI-related developments in financial services regulation.
Written by Harvey Spencer, Trainee Solicitor at Burges Salmon.