The UK government's Responsible Technology Adoption Unit (RTAU, a directorate within the Department for Science, Innovation and Technology, and formerly the Centre for Data Ethics and Innovation) has published an updated Portfolio of AI Assurance Techniques.

The Portfolio offers access to new use cases: real-world examples showing how AI assurance techniques promote trustworthy AI development. It is essential reading for anyone designing, deploying, or procuring AI.

Currently standing at 72 use cases submitted to RTAU (which neither RTAU nor the UK government endorses), the Portfolio is searchable by:

  • use case;
  • sector;
  • principle, as set out in the UK government's White Paper on AI regulation;
  • AI assurance technique (see below);
  • AI assurance technique approach, such as technical, procedural or educational.

What is AI assurance?

In RTAU's words, AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether a system meets relevant criteria, such as regulation, standards, ethical guidelines and organisational values. It also plays a role in identifying and managing AI risks.

Examples of AI assurance techniques

RTAU provides the following examples, each of which applies to one or more stages of the AI lifecycle and may be used on its own or in combination with others:

  1. Impact assessment: Used to anticipate the effect of a system on environmental, equality, human rights, data protection, or other outcomes.
  2. Impact evaluation: Similar to an impact assessment, but conducted retrospectively, after a system has been implemented.
  3. Bias audit: Assessing the inputs and outputs of algorithmic systems to determine whether there is unfair bias in the input data, or in the outcome of a decision or classification made by the system (see the first sketch after this list).
  4. Compliance audit: A review of a company’s adherence to internal policies and procedures, or external regulations or legal requirements. Specialised types of compliance audit include system and process audits and regulatory inspection.
  5. Certification: A process where an independent body attests that a product, service, organisation or individual has been tested against, and met, objective standards of quality or performance.
  6. Conformity assessment: Provides assurance that a product, service or system being supplied meets the expectations specified or claimed, prior to it entering the market. Conformity assessment includes activities such as testing, inspection and certification.
  7. Performance testing: Used to assess the performance of a system against predetermined quantitative requirements or benchmarks (see the second sketch below).
  8. Formal verification: Establishes whether a system satisfies requirements using the formal methods of mathematics (see the third sketch below).
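
To make a few of these techniques concrete, the sketches below are illustrative only; they are not drawn from the Portfolio. First, a minimal bias audit in Python, assuming hypothetical screening data and using the "four-fifths" selection-rate heuristic as a flag (a common rule of thumb, not a UK legal test):

```python
import pandas as pd

# Hypothetical outcomes from an automated screening tool: one row per
# applicant, recording a protected characteristic and the system's decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of each group the system selects.
rates = decisions.groupby("group")["selected"].mean()

# Disparate impact ratio: lowest selection rate divided by the highest.
# A ratio below 0.8 (the "four-fifths" heuristic) is a common trigger
# for further investigation; it is not a legal test in the UK.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Flag: selection rates diverge enough to warrant investigation.")
```

A real bias audit would go further, for example testing statistical significance and examining the input data as well as the outputs, but comparing outcomes across groups is the core step.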
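
Second, performance testing against predetermined benchmarks. The metric names and thresholds below are hypothetical; the point is that the acceptance criteria are fixed before the system is assessed, rather than chosen once the results are known:

```python
# Hypothetical acceptance criteria agreed before testing begins, and
# measured results from a held-out evaluation set (figures illustrative).
requirements = {"accuracy": 0.90, "recall": 0.85, "latency_ms": 200}
measured     = {"accuracy": 0.93, "recall": 0.81, "latency_ms": 150}

def passes(metric: str, value: float, threshold: float) -> bool:
    # Latency is a ceiling; the other metrics are floors.
    return value <= threshold if metric == "latency_ms" else value >= threshold

for metric, threshold in requirements.items():
    value = measured[metric]
    status = "PASS" if passes(metric, value, threshold) else "FAIL"
    print(f"{metric}: {value} (required: {threshold}) -> {status}")
```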
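
Third, formal verification. Rather than checking sample inputs, a solver reasons about every possible input. A minimal sketch using the Z3 SMT solver (assuming the z3-solver Python package is installed), with a hypothetical requirement that a score-clamping step always outputs a value between 0 and 1,000:

```python
from z3 import If, Or, Real, Solver, sat

# Model the clamping step symbolically: raw can be ANY real number.
raw = Real("raw")
clamped = If(raw < 0, 0, If(raw > 1000, 1000, raw))

# Ask the solver for a counterexample to the requirement 0 <= clamped <= 1000.
s = Solver()
s.add(Or(clamped < 0, clamped > 1000))  # negation of the requirement

if s.check() == sat:
    print("Counterexample found:", s.model())
else:
    print("Verified: the output is within [0, 1000] for every possible input.")
```

An "unsat" result here means that no violating input exists at all, which is what distinguishes formal verification from testing.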

Further detail about AI assurance is available in the UK government's Introduction to AI assurance. The development of the Portfolio forms part of the 'Governing AI effectively' pillar of the UK's National AI Strategy, and reflects the UK government's ongoing work to develop an effective AI governance ecosystem, including identifying potential barriers and enablers to effective AI governance.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.