Post written by Harvey Spencer, Trainee Solicitor, Funds & Financial Regulation

In recent months, the FCA has discussed its evolving approach to AI regulation on several occasions.

Back in July, an FCA board meeting took place at which the board:

  1. “noted that integration of AI into the regulatory landscape required a global framework and international engagement”;
  2. “raised the question of how one could ‘foresee harm’ (under the new Consumer Duty), and also give customers appropriate disclosure, in the context of the operation of AI”;
  3. “discussed the FCA’s own resourcing and capability on AI”;
  4. “considered it was important to discuss opportunities for achieving good outcomes for customers, integrity in markets, as well as efficiencies in firms”; and
  5. noted FCA CEO Nikhil Rathi’s recent speech on AI (which we have already covered in detail here).

This was followed in September by a report from The Alan Turing Institute on the opportunities and challenges AI presents specifically for the finance sector. The report urges regulatory authorities to “shift from a reactive to a proactive stance on AI and its implications” and to “[foster] collaboration” between regulators and AI developers. Mr Rathi’s stated intentions to open an AI sandbox and to collaborate with Big Tech firms on data-sharing may well be evidence of the regulator doing just that.

Unsurprisingly, the potential risks of AI are high on the agenda for both the FCA and The Alan Turing Institute. Both indicate that major industry players are expected to invest in fraud prevention, cyber resilience and robust “human-in-the-loop” systems for decision-making.

Jessica Rusu, the FCA’s Chief Data, Information and Intelligence Officer, spoke on this recently at the City and Financial Global AI Regulation Summit. Rusu used the metaphor of a “digital coin toss” to illustrate AI’s conflicting potential: transformative efficiency and accessibility versus catastrophic risk and safety concerns. The coin toss can be weighted in our favour, according to Rusu, with a strong regulatory framework, a pro-innovation approach and collaboration between key players.

Discussing the risks of AI, Rusu placed the onus on firms to take responsibility for their own operational resilience amidst a rise in AI scams. Importantly, this responsibility remains with a firm even where its services are outsourced to third parties.

Rusu also reiterated the importance of “ethical data usage” and questioned whether AI’s ability to detect and exploit patterns in data is always helpful.

Whilst clearly conscious of its risks, the FCA has harnessed AI to improve its own consumer protection capabilities. Rusu highlighted the newly developed “web-scraping and social media monitoring tools” that can detect and triage potential scams as examples of this. At this month’s Annual Public Meeting, the FCA confirmed that its own digital sandbox will launch soon, backed by a vast reserve of synthetic data to support testing in areas such as greenwashing regulation.

We will provide an update on the FCA’s final Feedback Statement on the AI Discussion Paper when it is published later this month.