Barriers and enablers to effective AI assurance

The UK's National AI Strategy includes the goal for the UK to have a world-leading AI assurance ecosystem. AI assurance is an ecosystem of 'mechanisms to assess and communicate reliable evidence about the trustworthiness of AI systems'. The UK's Centre for Data Ethics and Innovation (CDEI) published a Roadmap to an effective AI assurance ecosystem and has recently published its report 'Industry Temperature Check: Barriers and Enablers to AI Assurance'. Based on discussions with industry and tracking of industry engagement with AI assurance, the report highlights the key barriers to, and enablers of, AI assurance in HR, finance, and connected and automated vehicles. Here we highlight the key points.
The CDEI found a number of key themes. Respondents were asked to identify barriers to engaging with AI assurance techniques within their respective sectors - HR & Recruitment, Finance, and Connected and Automated Vehicles (CAVs). The summary of findings reflects the importance of context and sector.
| Barrier | HR & Recruitment | Finance | CAVs |
|---|---|---|---|
| Lack of knowledge/skills | High priority | High priority | High priority |
| Lack of guidance | Barrier not identified | Medium priority | Medium priority |
| Lack of awareness of available assurance techniques | Medium priority | High priority | Low priority |
| Lack of awareness of available standards | Barrier not identified | High priority | Medium priority |
| Lack of signposted best practice | Barrier not identified | High priority | High priority |
| Lack of demand (both internal/external) | High priority | Medium priority | High priority |
| Difficulty choosing an appropriate technique/standard | Low priority | Medium priority | Low priority |
| Financial cost of standards | Barrier not identified | Barrier not identified | Low priority |
| Lack of international interoperability | Low priority | Low priority | Low priority |
| Regulatory uncertainty | Barrier not identified | High priority | Low priority |
| Lack of mechanisms to recognise assurance efforts | Barrier not identified | Low priority | High priority |
| Complexity of standards landscape | Barrier not identified | Low priority | Low priority |
The barriers can also be categorised, which helps to understand where, when and with whom they arise:

- workforce barriers (e.g. lack of knowledge/skills);
- organisational barriers (e.g. lack of buy-in or resources);
- operational/market barriers (e.g. lack of a standardised/unified approach);
- governance barriers (e.g. regulatory uncertainty).
The report went on to suggest actions to support industries in combating these barriers and adopting assurance techniques and standards.
The CDEI will continue to 'encourage industry adoption of AI assurance techniques and standards' and will explore what practical support can be offered for AI assurance.
In the meantime, companies looking to build or buy AI will be considering legal assurance given anticipated AI regulations: the EU's AI Act is one step closer, and we await the UK's White Paper on AI regulation.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.
AI assurance - mechanisms to assess and communicate reliable evidence about the trustworthiness of AI systems - has an important role to play in helping to achieve the government's ambitions for a risk-based, pro-growth approach to AI governance.