The Centre for Data Ethics and Innovation (“CDEI”), in collaboration with techUK, has published its Portfolio of AI assurance techniques.
The CDEI states that the Portfolio will be of use to anybody involved in designing, developing, deploying or procuring AI-enabled systems. It showcases real-world examples of assurance techniques being used to support the continued development of trustworthy AI.
This delivers one of the actions identified in the UK White Paper: to 'help innovators understand how AI assurance techniques can support wider AI governance, the government will launch a Portfolio of AI assurance techniques in Spring 2023. The Portfolio is a collaboration with industry to showcase how these tools are already being applied by businesses to real-world use cases and how they align with the AI regulatory principles' (for a flowchart to help navigate UK AI regulation, click here).
Here we draw out the key points.
What is Assurance?
The CDEI defines ‘assurance’ as building confidence in AI by 'measuring, evaluating and communicating whether an AI system meets relevant criteria', such as:
- Regulation
- Standards
- Ethical guidelines
- Organisational values
What is in the Portfolio?
The CDEI’s portfolio contains fourteen real-world case studies, sourced from multiple sectors and covering a range of technical, procedural and educational approaches. The CDEI has mapped these techniques to the UK government’s white paper on AI regulation to show how they support wider AI governance. The case studies are not endorsed by government, but demonstrate the range of options that currently exist.
What are the Assurance Techniques?
The CDEI has detailed several techniques:
- Impact Assessment – Anticipates the effect of an AI system on the environment, equality, human rights, data protection and other outcomes.
- Impact Evaluation – An impact assessment conducted retrospectively, after the system has been implemented.
- Bias Audit – Assesses the inputs and outputs of algorithmic systems to determine whether bias is present in decision-making (see the sketch after this list).
- Compliance Audit – Reviews companies’ adherence to internal policies, external regulations or legal requirements.
- Certification – Independent body attests that a product or service has met objective standards of quality or performance.
- Conformity Assessment – Assures that a product or service meets specified or claimed expectations before entering the market.
- Performance Testing – Assesses system performance under predetermined quantitative requirements or benchmarks.
- Formal Verification – Establishes whether a system satisfies requirements using formal mathematical methods.
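To make one of these techniques concrete, below is a minimal, illustrative Python sketch of a bias audit. This is not a method prescribed by the CDEI or the Portfolio: the data, the `selection_rates` and `flag_disparate_impact` functions, and the 80% ('four-fifths') disparity threshold are all illustrative assumptions, and real audits would use context-appropriate fairness metrics and legal tests.

```python
# Illustrative bias audit sketch (hypothetical data and thresholds).
# Computes per-group selection rates for a system's binary decisions and
# flags groups whose rate falls below 80% of the highest group's rate
# (the "four-fifths" heuristic sometimes used as a disparate-impact screen).
from collections import defaultdict


def selection_rates(outcomes):
    """outcomes: list of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}


def flag_disparate_impact(rates, threshold=0.8):
    """Return groups whose selection rate is below threshold * best rate."""
    benchmark = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * benchmark]


# Hypothetical decisions: (group, selected?) pairs.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                         # {'A': 0.67, 'B': 0.33} (approx.)
print(flag_disparate_impact(rates))  # ['B']
```

In practice, a flagged disparity would prompt further investigation rather than an automatic conclusion of unlawful bias, and the choice of metric and threshold would depend on the sector and the applicable legal framework.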
Different techniques may be used at various stages across the AI lifecycle.
The techniques should also be seen in the context of the wider AI assurance ecosystem, including:
- UK AI Standards Hub;
- OECD AI catalogue of tools and metrics for trustworthy AI;
- Open Data Institute (ODI) data assurance programme; and
- the UK government's push for the UK to be a global leader in responsible AI, for example, by holding the first global summit on AI in late 2023.
Which techniques are right for a specific AI system and its use will depend on the context. The UK White Paper proposed cross-sectoral principles for AI regulation: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. What these principles look like in practice will depend on how regulators interpret their application within their regulatory remits and on the context of each AI system.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong or another member of Burges Salmon's Technology team.
Source: https://www.gov.uk/guidance/cdei-portfolio-of-ai-assurance-techniques