The FCA has published a speech by Jessica Rusu, Chief Data, Information and Intelligence Officer, given at The Alan Turing Institute’s Framework for Responsible Adoption of Artificial Intelligence in the Financial Services Industry (FAIR) event, exploring the foundations of AI regulation in financial services.

Key themes drawn out include collaboration and a clear focus on outcomes. The FCA recognises that, for regulation to work as intended, it requires engagement with those affected by the rules and standards – in this case, financial services firms. It also recognises that regulation should not deter innovation, but rather promote fair competition, protect consumers and support the effective functioning of markets.

Additional insights include:

AI usage and risks

Findings from the FCA’s recent survey with the Bank of England looking into machine learning and AI adoption in financial services show that there is broad agreement on the potential benefits of AI, with firms reporting enhanced data and analytic capabilities, operational efficiency, and better detection of fraud and money laundering as key positives.

The use of AI in financial services is also accelerating, with 72% of respondents reporting active use or development of AI applications. Usage is expected to triple over the next three years. Firms also reported that AI applications are now more advanced and embedded in day-to-day operations, with nearly 8 out of 10 in the later stages of development.

AI usage is not without risk, however, and can pose novel challenges for firms and regulators. In respect of consumers, data bias and data representativeness were identified as the biggest risks, while a lack of AI explainability was considered the key risk for firms themselves.

The speech further highlights research into gender gaps in AI and data science, finding that only 22% of data and AI professionals in the UK are women, and citing mounting evidence that the under-representation of women and marginalised groups in AI creates a feedback loop in which bias is built into, and amplified by, machine learning systems.

This leads to the question of whether AI in UK financial markets can be managed through clarifications of the existing regulatory framework, or whether a new approach is needed.

AI governance and risk management

The FCA notes that effective governance and risk management are essential across the AI lifecycle, putting in place the rules, controls and policies for a firm's use of AI. Good governance is complemented by a healthy organisational culture, which helps cultivate an ethical and responsible environment at all stages of the AI lifecycle: from idea, to design, to testing and deployment, to continuous evaluation of the model.

The FCA considers that the Senior Managers and Certification Regime (SMCR) provides the right framework to respond quickly to innovations, including AI. It creates an incentive to collect data to better measure the impact of the technology, including its impact on different demographics. This is relevant as there is a strong link between senior managers' objectives and diversity and inclusion outcomes.

Next steps on synthetic data

The speech also examines the role of synthetic data in testing, design, development and regulation. Synthetic data is an increasingly important area of research for the FCA, which has been exploring its potential to expand data sharing opportunities in a privacy-compliant manner.

There are significant challenges to accessing and sharing data, however, particularly for smaller firms. To tackle this, the FCA plans to publish a feedback statement to its previous call for input on the use of synthetic data, which will outline the potential role of a regulator in this space, including the need for guidance to build trust and promote responsible adoption of the technology.

The FCA also plans to publish a Call for Interest in a Synthetic Data Expert Group (SDEG) in the coming weeks. The expert group will be hosted and chaired by the FCA, will comprise experts from industry, regulators and academia, and will run for up to 24 months. Its purpose will be to:

  • clarify key issues in the theory and practice of synthetic data in UK financial markets;
  • identify best practice as relevant to UK financial services;
  • create an established and effective framework for collaboration across industry, regulators, academia and wider civil society on issues related to synthetic data; and
  • act as a sounding board on FCA projects involving synthetic data.