The FCA and Bank of England have published a report setting out the findings of the Artificial Intelligence Public-Private Forum (AIPPF) on the challenges and risks of using artificial intelligence (AI) in financial services. The report aims to advance collective understanding of the use of AI in financial services and to promote further debate among academics, practitioners and regulators about how best to support the safe adoption of this technology.

The AIPPF sought to:

  • Share information and understand the practical challenges of using AI within financial services, as well as the barriers to deployment and potential risks. 
  • Gather views on potential areas where principles, guidance or good practice examples could be useful in supporting safe adoption of these technologies.
  • Consider whether ongoing industry input could be useful and what form this could take.

Data, Model Risk and Governance

The AIPPF report considers three topics: data, model risk and governance.

This article looks at governance. But first, two quick points on data and model risk:

  • Data - as the report notes, 'AI begins with Data'. The benefits and risks associated with AI tools can be traced back to the data they use. The recent growth in AI has been driven partly by the increase in available data and by the ability of AI tools to process large volumes of unstructured data from various sources. The changing role of data in the AI lifecycle raises novel and complex governance issues.
  • Model Risk - risks related to the use of AI in financial services are not new. What is changing is the scale at which AI is being used, and the speed and complexity at which AI systems operate. These create new challenges and amplify existing ones, raising governance issues of their own.

The need for good governance

There are various governance issues associated with procuring, developing and deploying AI, and approaches to governance are naturally the subject of much scrutiny. Governance is a standalone issue in the UK National AI Strategy (which we wrote about here), and there is also debate about what good governance requires (in this article we wrote about recommendations for the UK Office for AI's white paper on the UK Government's approach to AI regulation, due in early 2022).

Good governance is needed to ensure the safe adoption of AI in financial services. The risks of using AI need to be identified and managed, which is especially important given the novel and dynamic ways in which AI is applied. In practical terms, challenges may arise when AI systems do not align neatly with internal operational or product governance functions, making it difficult to maintain clearly defined lines of accountability and co-ordination. Further, given the variety of potential uses of AI and the contextual differences between them, a single, common approach is unlikely to suit all financial services firms; what constitutes good governance will depend on each firm and each AI system.

Whilst financial services firms are well used to identifying and managing regulatory, conduct/customer and operational risk - and to adopting appropriate governance to manage such risks - there are some differences to bear in mind between governing AI and governing these more familiar areas.

The key findings that firms should bear in mind when considering governance are:

  • AI systems are distinct from other types of process-based decision-making tools because of their capacity for autonomous decision-making. Specific challenges arise in how such autonomous decisions are governed, including who should be held accountable and how.
  • Existing governance frameworks are likely to be relevant but are usually not designed for the specific challenges of AI. In financial services, for example, these include data governance and model risk management frameworks, as well as operational risk management. These can be leveraged and adapted to manage AI, but doing so may require a more dynamic approach to risk management and new ways of tracking and assessing outcomes against expectations.
  • Governance must be tailored to the AI system and its use case. Key elements will include transparency and communication.
  • Firms should appoint a central body to set AI governance standards, including specific requirements for particular use cases. Overall responsibility for AI could be held by one or more senior managers, with business areas accountable for outputs, compliance and execution.
  • Governance is more effective when it draws on a diversity of skills and perspectives, covering the full range of functions and business units. (The importance of diversity is a recurring theme in AI strategy, not just governance - for example, as part of the UK National AI Strategy's goal of 'Investing in the long-term needs of the AI ecosystem'.)

What happens next?

Discussion about the safe adoption of AI has only just begun, but the report expects to see:

  • industry bodies for AI practitioners, including voluntary codes of conduct;
  • regulatory support for innovation and adoption of AI, for example by providing clarity on how existing regulation applies to AI;
  • regulatory 'alignment', with co-ordination between regulators, both domestically and internationally, 'to keep all parts of the global AI ecosystem in check'.

A quick disclaimer: the report's conclusions are based on the views of individuals who are from, but not speaking on behalf of, various regulatory bodies and financial services firms. Nevertheless, the report is useful and worth a read to understand the key issues in deploying AI safely in financial services, and it provides a level of detail that a summary like this cannot.

If you want to discuss the topics raised here please contact Tom Whittaker, Martin Cook or your usual Burges Salmon contact.