The FCA has published a speech given by Jessica Rusu (FCA Chief Data, Information and Intelligence Officer) at the City and Financial Global summit on the regulation and risk management of artificial intelligence (AI) in financial services.
We set out below some highlights from her speech:
- There are real and legitimate concerns around economic stability, as well as ethical questions raised by AI's ability to mimic human intelligence. There are also more immediate concerns over firms exploiting consumer data, and the privacy risks associated with using AI for hyper-targeting.
- Following their recent survey on the use of machine learning and AI in financial services, the FCA and Bank of England found that the biggest risk for consumers is data bias and data representativeness, while the biggest risk for firms is a lack of AI explainability.
- In line with the FCA and Bank of England’s AI Discussion Paper, a key question for financial services is whether AI can be managed by fine-tuning the existing regulatory framework, or whether a new approach is needed. The speech suggests that the Senior Managers and Certification Regime (SMCR) can be applied to many of the new regulatory challenges posed by the use of AI in UK financial markets.
- Data is of particular importance to AI, and the importance of data quality assessments to determine relevance, representation and suitability is heightened in the context of AI. Safe and responsible AI adoption must be underpinned by high-quality data, and the use of AI must be fully compliant with existing legislation, in particular the data protection regime.
- There are practical challenges that any AI governance mechanism must address, including:
- Responsibility: who monitors, controls and supervises the design, development, deployment and evaluation of AI models;
- Creating a framework for dealing with novel challenges, such as AI explainability; and
- How effective governance contributes to the creation of a community of stakeholders with a shared skill set and technical understanding.
The speech makes clear that the responsibility for AI governance rests with firms and that agency cannot be attributed to AI systems.
The speech concludes by highlighting the need to leverage what is already in place in terms of existing regulatory frameworks and adapt them as technologies change.
“Machines don’t have agency, humans do.” (Jessica Rusu, Chief Data, Information and Intelligence Officer, Financial Conduct Authority)