The UK's Department for Science, Innovation and Technology (DSIT) has published a report, 'Accelerating the growth of the UK's AI assurance market' (here). For an overview of AI assurance, DSIT has also produced an introductory guide to AI assurance (here).
Here we summarise the key points of DSIT's report on the AI assurance market, and the UK government's actions to grow both the demand for and supply of AI assurance.
The AI assurance market is growing
DSIT's research has found:
- "There are currently an estimated 524 firms supplying AI assurance goods and services in the UK, including 84 specialised AI assurance companies." This compares to 17 specialised AI assurance companies identified in HMG's 2023 AI Sector Study, "indicating that the market for specialised AI assurance companies has grown significantly in the space of 2 years."
- “Altogether, these 524 companies are generating an estimated £1.01bn and employ an estimated 12,572 employees, making the UK’s AI assurance market bigger relative to its economic activity than those in the US, Germany and France.”
- “Despite evidence that both demand and supply are currently below their potential, there are strong indications that the market is set to continue growing, with the potential to exceed £6.53bn by 2035 if opportunities to drive future growth are realised.”
DSIT's research has found that AI assurance suppliers provide two main groups of products and services:
- consulting, advisory, training and/or procedural services and tools to help developers and deployers think through their AI assurance strategy and set up the right processes for effective assurance.
- technical tools that are used to assess AI systems.
DSIT considers that AI accreditation services may become a third group of products provided in this market in the future, but “there are currently no fully active certification schemes for AI assurance in the UK”.
Challenges
However, there are various reasons why the demand for, and supply of, AI assurance tools and services may be limited:
- lack of understanding among consumers of AI assurance about the risks posed by AI, relevant regulatory requirements, and the value of AI assurance;
- lack of quality infrastructure and limited access to information about emerging AI models restricting the supply of third-party AI assurance tools and services;
- the use of differing frameworks and terminology across sectors and jurisdictions, which fragments the AI governance landscape, inhibits the interoperability of AI assurance, restricts global trade in AI assurance and limits the adoption of responsible AI. For example, while the UK government refers to ‘AI assurance’, other governance frameworks — such as that of the National Institute of Standards and Technology (NIST) in the US — focus on ‘AI risk management’. This fragmented international landscape makes it challenging for suppliers and consumers of AI assurance to navigate.
- the quality of available tools and services is not clear. The report notes that one World Privacy Forum report found that 38% of the AI governance tools it assessed mention, recommend, or incorporate problematic metrics that could result in harm. Also, whilst assurance providers may emphasise the importance of technical standards, these are still in development (according to a recent DRCF study).
Actions
The UK government seeks to drive demand and supply of AI assurance tools and services by:
- developing an AI Assurance Platform to provide a “one-stop-shop for information on AI assurance and host … resources”.
- working with industry to develop a ‘Roadmap to trusted third-party AI assurance’ and collaborating with the AI Safety Institute (AISI) to advance AI assurance research.
- enabling the interoperability of AI assurance by developing a Terminology Tool for Responsible AI.
- publishing further sector-specific guidance. The first was for companies procuring and deploying AI for recruitment (March 2024, see our summary here), and further guidance, including for financial services, will be published “in the near future”.
The report also makes clear that the UK government considers it has already taken steps to improve understanding of AI assurance, including publishing an introduction to AI assurance (see here) and a portfolio of AI assurance techniques (see here), and launching a consultation on an AI Management Essentials programme (see here).
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.
"AI is at the heart of the Government’s plan to kickstart an era of economic growth, transform how we deliver public services, and boost living standards for working people across the country. My ambition is to drive adoption of AI, ensuring it is safely and responsibly developed and deployed across Britain, with the benefits shared widely." The Rt Hon Peter Kyle MP, Secretary of State for Science, Innovation and Technology