AI has transformative potential to disrupt financial services and deliver better outcomes to customers. Given these potential benefits, it’s no surprise that financial services firms are starting to incorporate AI into their business models and operations. However, the risks associated with AI abound. So what should financial services firms be aware of when using AI in their businesses?

Use cases 

Existing use cases in the market include the use of AI to support:

  • customer engagement and interaction – such as through the use of chatbots;
  • decision making – including in credit applications and investment management;
  • efficiencies and compliance – particularly in the context of anti-money laundering and fraud detection; and
  • advice – the development of AI financial advice tools (the next generation of robo-advice). 

For each use case, adopting AI tools allows firms to provide efficient and tailored services to their customers. However, risks need to be continually assessed and reviewed, and action taken where they materialise. Areas to consider and monitor include:

  • data risks, particularly whether the underlying data used in the model is free from bias that could result in poor or even discriminatory decision-making;
  • model risks, including errors in design or construction, lack of explainability, and unintended consequences;
  • the adequacy of oversight and governance structures;
  • information security and potential data breaches, especially when relying on third-party AI solutions; and
  • recognising, at each stage of the customer journey, when a customer may have a particular vulnerability and when human intervention is needed.

Relevant regulatory considerations for firms

We summarise the UK and EU approaches to regulation in our AI blog. As financial services firms and fintechs start to deploy AI tools, they also need to consider the existing financial services regulatory frameworks and how they apply.

Some key areas to be aware of include: 

  • Principles - The FCA takes a clear principles-based and outcomes-focused approach, exemplified by the Consumer Duty, which came into force in July 2023. The Consumer Duty, the main principle of which is to act to deliver good outcomes for consumers, is particularly relevant to firms using AI as part of their product offering. Firms need to consider how their deployment of AI aligns with both the cross-cutting rules (in particular, the need to avoid causing foreseeable harm) and the four outcomes. The Duty is also likely to be relevant to firms providing services or products to consumer-facing regulated financial services firms, as those customers seek to understand and implement compliance through their supply chains.
  • Governance - The FCA has also highlighted that the existing regulatory framework on governance is central to facilitating the responsible implementation of AI. A speech by Jessica Rusu (FCA Chief Data, Information and Intelligence Officer) highlighted how the Senior Managers and Certification Regime “creates a system that holds senior managers ultimately accountable for the activities of their firm, and the products and services they deliver…”. Senior managers of financial services firms will therefore need to take steps to ensure that AI is deployed responsibly and that processes are adequate to avoid customer detriment and deliver good outcomes.
  • Privacy - Given that financial services firms will often be processing personal data, they need to consider and address how the use of AI may affect their obligations under UK data protection law. The ICO has issued separate guidance on AI and data protection that all firms should consider.

The focus on AI and regulation will intensify in the coming years. Given AI’s potentially wide-ranging impact, regulators across different industries are working together to support businesses on AI use and development (e.g. through the Digital Regulation Cooperation Forum (DRCF), whose members include the ICO, Ofcom, the FCA and the CMA). Financial services firms will need to monitor current and future regulatory initiatives closely, address any immediate deficiencies in their compliance frameworks, and be ready to respond to new regulation as it emerges.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Martin Cook or Brandon Wong. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com)

Written by Beth Jewell and Brandon Wong