The Bank of England (the Bank) and the FCA have just released the results of a joint survey into the use of artificial intelligence (AI) and machine learning (ML) in the UK's financial services industry. This survey is the latest in a series through which the Bank and the FCA have built up their insight into how AI is being used across the sector.
What do the results of the survey tell us?
In summary, the survey results tell us that there has been an uptick in the use of AI throughout the industry and that this trend looks set to remain upward for the next few years.
There has been an increase in third-party exposure, with a third of all AI implementations now being third-party implementations. Again, this trend looks set to remain upward as the complexity of models increases.
It appears that over half of AI use cases involve “some degree of automated decision-making” but that only a very small percentage involve fully autonomous decision-making.
In terms of understanding, around half of respondents reported only a “partial understanding of the AI technologies they use”. In relation to third-party implemented models specifically, firms noted a “lack of complete understanding compared to models developed internally”.
In terms of risk versus benefit, the greatest benefits of using AI were considered to be “data and analytical insights, anti-money laundering…and combating fraud, and cybersecurity”, with “operational efficiency, productivity and cost base” cited as the areas with the greatest expected increase in benefits over the next three years. On the risk side, the most commonly perceived risks relate to data. The risks considered most likely to increase over the next three years are “third-party dependencies, model complexity, and embedded or ‘hidden’ models”.
Of the risks noted, cybersecurity and dependency on critical third parties are considered to present significant systemic risks to the sector.
The results of the survey refer to both “regulatory” and “non-regulatory” constraints on the use of AI. Constraints referred to include data protection and privacy, resilience, the Consumer Duty, the safety and security of the models, and lack of expertise.
In terms of governance, many firms reported “that their executive leadership were accountable” for their use of AI.
Why the survey?
As the use of AI across the industry increases, it is important for the Bank and the FCA to “maintain an understanding of the capabilities, development, deployment and use of AI”, including of the relative benefits and challenges that AI presents to firms, to consumers and to the financial system as a whole.
Demographics
The survey included respondent firms from the banking (both UK and international banks), insurance, investment and capital markets, non-bank lending and financial markets infrastructure (FMI) sectors of the industry.
The highest level of use of AI seems to be within the insurance sector (closely followed by international banks) and the lowest level of use appears among FMIs.
Use cases
The most likely business areas in which firms are using AI appear to be operations and IT, followed by retail banking, and then general insurance. The highest percentages of use cases of foundation models (large-scale machine learning models trained on broad data and adaptable to a wide range of tasks) were found in legal teams, human resources and research functions.
The most common types of use case are “optimisation of internal processes”, cybersecurity and fraud detection, with advances in use over the next three years likely to come in the areas of customer support, compliance, and fraud detection.
Dependency risk
There has been a significant increase in the deployment of third-party implementations and a consequent increase in perceived third-party dependency risk. The survey included a deep dive into the “top three third-party providers of cloud, models, and data”; notably, these accounted for 73%, 44% and 33% of all named providers respectively.
Materiality
In terms of materiality (defined by reference to potential impact), the majority of AI use cases were “rated as low materiality”, with most use cases occurring in operations and IT. The high materiality cases “were most common in general insurance, risk and compliance, and retail banking”.
Governance
The respondent firms were asked about their approach to governance. The responses suggest that by far the most common approaches include having an accountable person or persons (usually from the board, but also from development, data or business-area teams) with overall responsibility for AI; using an AI framework; adopting principles, guidance or best practice; and data governance. Data management is a key concern for firms, with data privacy and security topping the list of more granular concerns within that overall category of risk.
Data
Data risks appear to come out at the top of the list of concerns with “four of the top five risks related to the use of data” and the “three biggest current risks” being identified as “data privacy and protection, data quality, and data security”. Data risks ranked closely behind only cybersecurity and critical third-party dependencies in terms of the highest ranking potential systemic risks and were identified as the “greatest regulatory constraint”.
Non-regulatory constraints
The table depicting responses to the survey around “non-regulatory constraints to AI adoption” (credit for the image shown to the FCA and to the Bank of England: Artificial intelligence in UK financial services - 2024 | Bank of England) demonstrates a diverse range of overlapping concerns and constraints, with data once again coming out at the top of the list. The top three non-regulatory constraints were rated as safety, security and robustness; insufficient talent/access to skills; and appropriate transparency and explainability.
It is interesting that the second of these constraints relates to expertise, and that this issue overlaps significantly with the third, which relates to transparency and explainability. These are complex and inter-related issues that will require much thought going forward, especially as AI and ML capabilities evolve and advance and given that, as the survey demonstrates, firms currently report only a “partial understanding” of the AI technology in use across the sector.
Click to meet our financial services lawyers and, so as not to miss an update, subscribe to our regular financial services regulation round-up by using this link.
https://www.bankofengland.co.uk/report/2024/artificial-intelligence-in-uk-financial-services-2024