This week's news headlines have included many stories about the potential for the transformational use of AI in financial services, some of them using one of the latest buzzwords: ‘agentic’ AI. What is that?

Agentic AI

Agentic AI, in very basic terms, is AI that can handle complex tasks and apply reasoning and problem-solving to them. It is a step beyond AI used to assist with repetitive or simple tasks: agentic AI has some form of learning or reasoning ability and can adapt to new situations. In the financial services context, agentic AI might be used to detect fraudulent activity in market data and react to it proactively, to process credit applications from start to finish, or to deliver personalised pension advice.

AI gaffes

In the same week of news, we also saw a story about AI giving misinformed statistics about the popularity of gouda cheese. It seems AI has made mistakes about cheese before: last year, AI recommended using glue to better stick cheese to our pizzas. So, if you cannot 100% trust AI to recommend the most popular cheese, or to give the best advice on pizza toppings, could you trust it with, say, your mortgage application? The answer is that one day you probably will be able to, but not quite yet.

Big questions

There are some big questions to grapple with before AI goes mainstream in financial services, and before AI might reliably be trusted to make complex decisions about our financial well-being. Some of these questions include:

  • Who understands how AI works and the outputs that it generates?
  • Do we need those people who understand AI on the boards of our financial services firms?
  • Do we need to keep humans in the loop, and will that be possible as AI evolves at pace?
  • How do we maintain and support the responsible use of AI in financial services?
  • How will we know which AI providers are resilient enough and of sufficient quality to be involved in the financial services markets?
  • Should AI providers meet recognised quality standards?
  • Should AI professionals have to meet standards or codes of practice?
  • What general safeguards are needed around the use of AI in financial services?
  • How do we protect personal data?
  • Should AI be outsourced or developed in-house?
  • Does regulation have any hope of keeping pace with AI?
  • How do we build the AI skills of consumers of AI-powered financial services?

The list of questions goes on. Many, including the government and the financial services regulators, have been grappling with these questions, and others like them, for years. It is currently difficult, but not impossible, to see how the financial services industry will gain the confidence to move from using AI for routine, repetitive back-office tasks to some form of more transformational use. Without a doubt it will happen; the question is when and how.

If you would like to discuss how current or future regulations impact what you do with AI, please contact me or Martin Cook, Tom Whittaker, Brian Wong, Lucy Pegler, or any other member of our Technology Team. Click here to meet our financial services lawyers. You can subscribe to our regular financial services regulation round-up by using this link.