On 22 April 2024, the FCA, Bank of England (BoE) and Prudential Regulation Authority (PRA) (the Regulators) published their responses to the government’s White Paper on regulating the use of AI and machine learning (ML). The BoE and PRA issued a joint response and you can read our blog on that here. We consider the FCA’s position below. To cut a long story short, the Regulators are working consistently with each other and are supportive of the government’s pro-innovation approach.

It seems that the Regulators continue to anticipate no mandated, prescriptive, rules-based legislative approach to the use of AI or ML. The thinking is that such an approach would limit the flexibility and agility of the Regulators in exercising their oversight and supervisory functions as they seek to provide workable solutions in this fast-changing and developing area.

Instead, the Regulators have been empowered to embrace the benefits, and address the risks, faced by the financial services sector in a principles-driven and outcomes-focused way that is more responsive and better suits the needs of the markets as a whole. The key words applicable to the general approach are ‘responsible’ and ‘safe’: beneficial innovation is enabled to push forward in a way that maintains financial stability, trust and confidence. This, of course:

  • is consistent with major regulatory themes in the financial services sector of recent years; and 
  • reflects the “in practice” reality of the adoption of these technologies (industry innovators, for example, have been implementing ML solutions for a good part of the past decade).

The FCA published its response to the White Paper, ‘AI Update’, on its website. The FCA’s approach is aligned with that taken by the BoE and PRA: it is technology agnostic and considers that the existing regulatory framework can support AI and ML innovation in ways that will benefit the financial services sector and the wider economy, while assessing and addressing the risks effectively. Like the BoE and PRA, the FCA highlights the need for the regulatory response to be agile and proportionate, and rooted in the foundational principle of strong accountability.

The Regulators are all investing heavily to ensure that they can support the safe development of AI, both for regulated entities and for their own use, where it has already made a transformational difference to the speed with which they can detect scams and sanctions breaches. The FCA has developed its Regulatory and Digital Sandboxes and TechSprints, and a dedicated digital hub staffed with data scientists. It also cooperates with regulators in other sectors; of note here is the Digital Regulation Co-operation Forum (DRCF), a collaboration between the FCA, the Information Commissioner’s Office, Ofcom and the Competition and Markets Authority.

The FCA is aligned with the BoE and PRA in relation to the government’s ‘five principles’, which are key to the regulation of AI in the UK: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. We have written in more detail about these principles in our blog on the BoE and PRA response, which you can read here. Some of the key FCA-specific requirements, in the context of these principles, are as follows:

  • Safety, security and robustness: Firms are required to conduct their business with due skill, care and diligence, organising and controlling their affairs responsibly and effectively with adequate risk management systems; to have suitable business models, to conduct their affairs in a sound and prudent manner, and to consider the interests of consumers and the integrity of the financial system; and to ensure that they are able to respond to, recover from, learn from and prevent future operational disruptions.
  • Transparency and explainability: There are high-level requirements which apply to the information that firms provide to customers, including requirements to act in good faith and to communicate in a way that is clear, fair and not misleading; there are requirements under data protection legislation to process data in ways that are fair and transparent; and there are specific rules around automated decision-making.
  • Fairness: Firms are required to comply with the Consumer Duty and deliver good outcomes for consumers. This includes: acting in good faith; not causing foreseeable harm; enabling retail customers to pursue their financial objectives; treating data correctly and ethically in line with applicable data protection requirements; and treating customers in vulnerable circumstances fairly (you can read our blog about this here).
  • Accountability and governance: High-level rules require firms to have clear organisational structures with clear lines of responsibility, effective processes to identify, manage, monitor and report risks, and sound internal control measures; and the Senior Managers and Certification Regime (SM&CR) requires regulated firms to ensure that one or more senior managers have overall responsibility for a firm’s main activities, business areas and management. AI and ML would fall squarely within the scope of this regime.
  • Contestability and redress: There are mechanisms to ensure that firms can be held accountable for decisions or outcomes that cause consumer harm and through which consumers can obtain redress; there are requirements to maintain complaints handling procedures and to ensure that complaints made are handled fairly and promptly; and there are specific requirements relating to data subject rights to contest automated decisions. 

Over the next year, the FCA will focus on exploring the benefits and the risks of AI and ML in the context of its statutory objectives, which are to: (1) protect consumers; (2) protect and preserve the integrity of the UK’s financial system; and (3) promote effective competition in the financial services markets. It will do so by:

  • developing its understanding of AI deployment in the financial markets, including monitoring the effect that it has on consumers and on the markets, and ensuring that it is positioned to take effective and proportionate supervisory responses where necessary;
  • collaborating with other regulators (both domestic and overseas), regulated firms, tech experts, academics and other stakeholders to build understanding and intelligence, and to support consensus going forward;
  • promoting AI in pilot environments where experts can safely and responsibly explore and test technological opportunities using quality synthetic data, so that no harms can materialise;
  • ensuring the continued success and competitiveness of the UK’s financial services markets by supporting innovation and technological development; 
  • ensuring that regulated firms are resilient and can identify and mitigate the risks posed by AI and ML including new and emerging ones;
  • understanding how the speed, scale and complexity of AI may need to be addressed by management and governance within firms;
  • considering how to address issues such as testing, validation, explainability, accountability and corporate culture with agile regulation; 
  • understanding the UK’s digital infrastructure (and the wider context including, for example, cyber security, quantum computing and data); and
  • taking a proactive approach to risks within the regulatory framework, including cyber security, operational resilience and outsourcing risks, including those associated with critical third parties (we have written about the latest position on critical third parties and you can read about that here).

To conclude, the FCA is committed to continuing to develop its approach to data and technology with the aim of becoming an ‘innovative, assertive and adaptive regulator’. It is using AI to aid the development of tools that will better protect consumers and markets; using synthetic data to contribute to innovation; using ML to fight online scams; and working with surveillance specialists to develop and test AI-powered solutions that might one day be able to identify complex types of market abuse and other financial wrongdoing. The FCA must continue to be a technical expert on matters relating to financial services, and it has invested enormously in its own innovation and technology functions and skillsets as well as supporting firms to enhance innovation in the markets. The FCA is collaborating closely with other regulators, both domestically and internationally, to ensure consensus and alignment on best practice and on future regulatory developments.