The Centre for Data Ethics and Innovation (CDEI) - an independent advisory body with a mandate to enable ethical innovation in AI and data-driven technologies - is two years old. In a report reviewing what it has achieved over that period, it also sets out how to create the conditions for responsible innovation, including the role of AI assurance.
Three key challenges
From its work to date, the CDEI has identified a need to:
1. develop and maintain accountability when deploying data-driven technologies
For example, the CDEI's review into bias in algorithmic decision-making identified "the risk of algorithms obscuring the accountabilities and liabilities that individual people or organisations have for making fair decisions." Decision-makers need to retain accountability for decisions regardless of whether they are made by algorithms or humans. Industry standards are also required. And regulators need to share capabilities to effectively regulate AI in their sectors (rather than relying on a cross-sectoral AI regulator).
2. address the transparency and explainability of data-driven systems
Technology offers opportunities to improve accountability and transparency "especially where algorithms are used when making significant decisions about individuals". This is particularly the case in the public sector. For example, the CDEI is working with the Cabinet Office's Central Digital and Data Office to develop standards for algorithmic transparency in the public sector (and other government departments have published guidance for when public bodies use automated decision-making, with a view to ensuring an ethical and transparent approach).
3. improve access to high quality data
This is "crucial to the development, deployment and evaluation of data-driven technologies". New data stewardship models are required that enable data to be handled safely and responsibly. And technological solutions, such as synthetic data, can help preserve privacy and confidentiality.
Responsible innovation
The CDEI will prioritise three themes in its work to help foster responsible innovation and address the challenges above:
- Facilitate responsible data sharing, piloting new forms of data stewardship and governance
- Facilitate public sector innovation for the responsible development, deployment and use of AI and data
- Lay the foundations for the development of a strong AI assurance ecosystem, fostering an emerging industry in AI assurance services
Of particular interest is AI assurance. As the CDEI report explains, "Assurance covers a number of governance mechanisms for third parties to develop trust in the compliance and risk of a system or organisation" and aims to ensure that regulators, developers, and those who use or rely upon AI have confidence that it is safe and reliable. However, "the picture is fragmented, with insufficient consensus among organisations and within sectors on what it means to be accountable and transparent when, for example, relying on algorithmic decision-making - making it harder than it should be for organisations to do so confidently and responsibly". How can organisations or users know whether the use of AI and data-driven technologies is fair, safe or acceptable?
CDEI's work will assess how assurance approaches used in other sectors could be applied to AI, and the role of standards to support this. We can expect to see an AI assurance roadmap "to help industry, regulators, standards bodies, and government, think through their own roles in this emerging ecosystem."
In parallel, AI assurance is likely to come under the spotlight thanks to the EU's proposed regulation of AI, which requires high-risk AI systems to undergo mandatory third-party conformity assessments before they are placed on the market or put into service. The EU intends its regulations (even at the proposal stage) to shape the debate on AI regulation globally. As CDEI notes, "AI assurance is likely to become a significant economic activity in its own right and with strengths in research, legal and professional services, the UK is well placed to take a leadership role globally."
As the report puts it, there is no inherent contradiction between the necessity of ethics and the drive for innovation. Data and data-driven technologies (including AI) can grow our economy and help to tackle deep-seated societal challenges. But to harness this potential and build public trust over the long term, the UK will need to develop effective governance that incentivises responsible innovation.

This article was written by David Varney and Tom Whittaker.