AI assurance is set to play a crucial role in delivering the effective governance objective of the UK National AI Strategy, according to the Centre for Data Ethics and Innovation (CDEI) in its Roadmap to an Effective AI Assurance Ecosystem. The Roadmap sets out the steps needed to grow a "mature", "world-class" AI assurance industry in the UK. The CDEI calls for an approach to assurance similar to that already taken in other sectors, such as financial reporting and data protection, to enable businesses, users and regulators to trust that AI systems are effective, trustworthy and legal.

The roadmap looks at six areas for development:

1. Generating demand for assurance;
2. Supporting the market for assurance;
3. Developing standards;
4. The role of professionalisation and specialised skills;
5. The role of regulation; and
6. The role of independent researchers.

Here we briefly set out what AI assurance is and the role it will play in the future regulation of AI.

AI assurance as part of the UK AI Strategy and potential regulation

The UK launched its National AI Strategy in September 2021.  One of the strategy's pillars is Governing Effectively - ensuring the UK gets the national and international governance of AI technologies right, both to encourage innovation and investment and to protect the public and our fundamental values.  This would be achieved, in part, by the CDEI publishing the AI assurance roadmap.

The roadmap "follows calls from public bodies including the Committee on Standards in Public Life, and industry, to build an ecosystem of tools and services that can identify and mitigate the range of risks posed by AI and drive trustworthy adoption."

AI assurance is also a key part of the potential regulation of AI.  The UK is set to publish a White Paper on regulating AI in early 2022, which is anticipated to highlight the role assurance will play in ensuring that AI systems meet their regulatory obligations.

Similarly, under the EU's proposed AI Act, high-risk AI systems will be subject to strict obligations - such as risk assessments, risk mitigation systems and traceability - and to AI assurance before they can enter the market. We wrote about the EU's proposal here and provided an update on its progress in November 2021 here.

AI Assurance and trust

AI systems are being developed and deployed across the UK economy.  AI promises great benefits but carries risks which need to be managed (see here for the three categories of AI risk identified by the US National Institute of Standards and Technology (NIST)).  Assurance is about building confidence and trust in an AI system, which is crucial so that:

  • the responsible party can demonstrate and communicate the trustworthiness of the AI system - for example, the company procuring the AI system wants to know that the system functions as expected;
  • the assurance user needs to trust the responsible party because they are affected by how the responsible party deploys the AI system - this can include the direct user of the assurance, such as the company director who is responsible for governance of the system, or indirect users, such as the end users;    
  • regulators and professional bodies can understand what the AI system does, how and why.

As the roadmap says:

This is where assurance is important. Being assured is about having confidence or trust in something, for example a system or process, documentation, a product or an organisation.

Assurance addresses two issues:

- the information problem - organisations need information to reliably and consistently evaluate whether an AI system is trustworthy; for example, how the system performs, how it is governed, and whether it complies with standards and regulation.  Assurance seeks to ensure that this information is made available; and

- the communication problem - organisations then need to communicate that evidence to others so that trust can be placed in the AI system.  Assurance seeks to provide that communication in a consistent, understandable and useful way.

The roadmap looks to the more mature assurance ecosystems in other industries, in particular accounting.  Whilst there are similar roles, responsibilities and institutions across those industries - providing transferable assurance approaches - there is variation in how assurance is performed. Assurance mechanisms include: impact assessments; bias and compliance audits; certification; performance testing; and formal verification.

However, "in each case, the need to assure different subject matters has led to variation in the development and use of specific assurance models, to achieve the same ends."

Which assurance methods are required (or are simply beneficial) will depend on the specific requirements of each AI system, the responsible party and the users, as well as on the relevant legislation and regulation.
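By way of illustration only, the sketch below shows what two of the assurance mechanisms mentioned above - a basic performance test and a simple bias audit over a classifier's outputs - might look like in practice. All of the data, group labels, metrics and thresholds are hypothetical assumptions chosen for this example; they are not methods prescribed by the CDEI roadmap or any regulator.

```python
# Illustrative sketch only: a toy "performance test" and "bias audit" over a
# classifier's outputs. All data, group labels and thresholds are hypothetical
# assumptions for this example, not methods set out in the CDEI roadmap.

from collections import defaultdict


def accuracy(predictions, labels):
    """Share of correct predictions - simple performance-testing evidence."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)


def selection_rates(predictions, groups):
    """Rate of positive predictions per group - simple bias-audit evidence."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}


# Hypothetical model outputs, ground-truth labels and group markers.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())

print(f"Accuracy: {accuracy(preds, labels):.2f}")
print(f"Selection rates by group: {rates}")
print(f"Selection-rate gap: {gap:.2f}")  # flag if above a chosen tolerance
```

Evidence of this kind - accuracy figures, per-group selection rates and the gap between them - is the sort of information an assurance user might expect the responsible party to make available and communicate, addressing the information and communication problems described above.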

Different regulators may take different approaches; the Information Commissioner's Office has already begun to develop a number of initiatives to ensure that AI systems are developed and used in a trustworthy manner. 

However, organisations looking to develop and deploy AI systems will want to take a closer look at AI-specific assurance approaches, not only because of cross-sector and industry-specific AI regulation, but also because of the commercial benefits of good governance and of demonstrating trust to consumers.

This article was written by Tom Whittaker and Eve Jenkins.