The UK's National AI Strategy includes the goal for the UK to have a world-leading AI assurance ecosystem.  AI assurance is an ecosystem of 'mechanisms to assess and communicate reliable evidence about the trustworthiness of AI systems'. The UK's Centre for Data Ethics and Innovation (CDEI) published a Roadmap to an effective AI assurance ecosystem and recently published its 'Industry Temperature Check: Barriers and Enablers to AI Assurance'.  Based on discussions with industry and tracking industry engagement with AI assurance, the report highlights the key barriers to, and enablers of, AI assurance in HR, finance, and connected and automated vehicles. Here we summarise the key points.

How does industry view and use AI assurance?

The CDEI identified a number of key themes:

  • AI assurance is seen as part of wider risk management - existing frameworks may be adaptable to AI systems;
  • industry supports a proportionate response to assurance.  Context is important; the CDEI chose to engage with industry on a sector-based approach to align with 'the UK’s decentralised, context-based approach to AI regulation, as outlined in “Establishing a pro-innovation approach to AI regulation”';
  • third-party certification/accreditation is welcome - it provides an 'impartial perspective' - provided it is consistent and robust;
  • standards to support AI assurance techniques - participants reported a variety of approaches: some use standards developed by standards development organisations, some use their own, and some use none because the standards landscape is 'complex and difficult to navigate';
  • impact assessments are a frequently used assurance technique;
  • assurance can provide a competitive edge - through building customer trust and managing reputational risk;
  • regulatory compliance is a key driver of assurance.

Barriers identified 

Respondents were asked to identify barriers to engaging with AI assurance techniques within their respective sectors - HR & Recruitment, Finance, and Connected and Automated Vehicles (CAVs).  The summary of findings reflects the importance of context and sector.

Barrier | HR & Recruitment | Finance | CAVs
Lack of knowledge/skills | High Priority | High Priority | High Priority
Lack of guidance | Barrier not identified | Medium Priority | Medium Priority
Lack of awareness of available assurance techniques | Medium Priority | High Priority | Low Priority
Lack of awareness of available standards | Barrier not identified | High Priority | Medium Priority
Lack of signposted best practice | Barrier not identified | High Priority | High Priority
Lack of demand (both internal/external) | High Priority | Medium Priority | High Priority
Difficult to choose an appropriate technique/standard | Low Priority | Medium Priority | Low Priority
Financial cost of standards | Barrier not identified | Barrier not identified | Low Priority
Lack of international interoperability | Low Priority | Low Priority | Low Priority
Regulatory uncertainty | Barrier not identified | High Priority | Low Priority
Lack of mechanisms to recognise assurance efforts | Barrier not identified | Low Priority | High Priority
Complexity of standards landscape | Barrier not identified | Low Priority | Low Priority


The barriers can also be categorised, which helps to understand where, when and with whom they arise: workforce barriers (e.g. lack of knowledge/skills); organisational barriers (e.g. lack of buy-in or resources); operational/market barriers (e.g. lack of a standardised/unified approach); and governance barriers (e.g. regulatory uncertainty).

Proposed interventions to support AI assurance

The report went on to suggest actions to support industries in combating these barriers and adopting assurance techniques and standards.

  • General learning and development (L&D) and sector-specific guidance – e.g. the CDEI / REC 'Data-driven tools in recruitment' guidance.
  • Toolkit of AI assurance techniques – to raise awareness of available assurance techniques.  The OECD is currently developing a database of tools for AI assurance, which will provide users and policymakers with information on the latest tools that help ensure AI systems operate with fairness, transparency, security and accountability.
  • AI standards repository – to help increase awareness of technical standards.  The UK's AI Standards Hub provides an international platform dedicated to sharing knowledge and research on AI standards.
  • Demonstrate the value of assurance – e.g. CDEI AI Assurance guide.
  • Mature governance & regulatory landscape – the CDEI has already published a policy paper on Responsible Innovation in Self-Driving Vehicles.  Further reports in this area and in the other sectors can help shape the regulatory landscape as AI uptake grows, and can create clear mechanisms to recognise assurance efforts.
  • Examples of good practice – 'In Spring 2023, the CDEI will publish a portfolio of AI assurance case studies, to showcase examples of how different organisations are using assurance techniques across sectors and provide a reference of a starting point for assurance good practice.' 
  • Additional regulatory clarity - including through the UK's anticipated White Paper on how AI will be regulated.

Next steps

The CDEI will continue to 'encourage industry adoption of AI assurance techniques and standards' and will look at what practical support can be offered for AI assurance.

In the meantime, companies looking to build or buy AI will be considering legal assurance given anticipated AI regulations: the EU's AI Act is one step closer, and the UK's White Paper on AI regulation is awaited.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.