The FCA has published a speech given by its Chief Executive on the emerging regulatory approach to Big Tech and AI.

Artificial intelligence

Notable points in Nikhil Rathi’s speech on AI include:

  • that AI can benefit markets (e.g. by improving financial models, tackling the advice gap and hyper-personalising products and services), but can also cause imbalances and risks that affect the integrity, price discovery, transparency and fairness of markets if unleashed unfettered.
  • a number of examples of the risks that AI can pose to financial markets, including misinformation on social media, misuse of generative AI, deepfake video scams and data bias. Other related risks include cyber fraud, cyber attacks and identity fraud. The FCA confirms that it intends to take a robust line on fraud prevention and on operational and cyber resilience.
  • a focus on explainability of AI models.
  • the impacts of AI (and tech enablement more generally) on the investment management sector, particularly given competition and cost pressures.
  • the establishment of the FCA’s Digital Sandbox, which uses transaction, social media and other synthetic data to help fintech and other innovations develop safely. The FCA also uses AI internally, applying AI methods to firm segmentation, portfolio monitoring and the identification of risky behaviours.
  • the extension of the FCA’s global techsprint approach to include AI risks and innovation opportunities.

Another theme emerging from the speech is accountability (i.e. whether responsibility lies with users, firms or AI developers).

The FCA recognises that regulation must be proportionate enough to foster beneficial innovation but robust enough to avoid a race to the bottom and a loss in trust and confidence. It calls on firms to engage with its upcoming AI Sandbox to address these issues, particularly given the increase in AI-based business models both from new entrants and existing firms.

It notes that the Consumer Duty and SMCR regimes provide existing frameworks for addressing issues raised by AI, but that future debates, such as whether there should be a bespoke SMCR-type regime for the most senior individuals managing AI systems, will also inform AI regulation to come.

Big Tech

The FCA has also published its feedback statement on the competition impacts of Big Tech in Financial Services, including a call for further input on the role of Big Tech firms as gatekeepers of data and the implications of the ensuing data-sharing asymmetry between Big Tech firms and financial services firms. We will be sharing further thoughts on the feedback statement in due course.

The FCA is further considering the risks that Big Tech may pose to operational resilience in payments, retail services and financial infrastructure. It recognises that partnerships with Big Tech can offer opportunities – particularly by increasing competition for customers and stimulating innovation – but further testing is needed to determine whether the entrenched power of Big Tech could also introduce risks to market functioning.

Separately, and alongside the PRA, the FCA will also regulate Critical Third Parties, setting standards for their services (including AI) and resilience (highlighting that as of 2020, nearly two thirds of UK firms used the same few cloud service providers).

Given current market trends, we expect to see a continued regulatory focus on AI and Big Tech in financial services.