The Data Justice Lab at Cardiff University has produced "the first comprehensive overview of [automated decision-making systems in public services] being cancelled across western democracies", setting out the reasons for those cancellations and drawing lessons from them.
The report is the outcome of research concerning paused or cancelled government automated systems from the UK, Australia, Canada, Europe, New Zealand and the U.S., in areas including fraud detection, child welfare and policing.
The report is important for several reasons:
- automated decision-making systems (ADS) are being used more widely in the delivery of public services (including to inform decisions about service delivery and increase efficiency).
- the report refers to ADS as "technical systems designed to help or replace human decision making": they influence or replace humans making decisions for public bodies.
- the stakes are high: ADS are being used to detect crime and spot fraud, and to determine whether child protective services should act.
- it is not intended to discourage innovation. Instead, it makes recommendations designed to strengthen governance, increase transparency and trustworthiness, and protect communities and individuals. It provides useful, in-depth case studies. And there is much we can learn from failed projects: "examples of testing and learning from piloting any type of system can and should be shared with the associated learning".
- (and, for those interested in the detail, the references provide a wealth of relevant resources).
Here we draw out some key points relating to the when, why, who and what.
When did the ADS fail?
Of the 61 ADS cancellations studied, the systems failed at the following stages:
| Stage of ADS failure | Number of systems |
| --- | --- |
| Development / investigatory stage | 3 |
| After pilot / testing | 9 |
| After implementation / use | 31 |
| Pre-emptive ban / moratorium | 18 |
Identifying the precise stage at which a system failed is not straightforward. For example, the report suggests that many of the pre-emptive ban / moratorium cases related to facial recognition systems on which a moratorium had been imposed but which had already been trialled. This is a useful illustration of how difficult it is to monitor ADS and AI objectively across jurisdictions and contexts, particularly where information is limited.
Why did the ADS fail?
They failed for the following reasons:
| Reason for ADS failure | Number of systems |
| --- | --- |
| Government agency decision - effectiveness | 31 |
| Civil society critique or protest | 26 |
| Critical media investigation | 24 |
| Legal action | 19 |
| Government concern - privacy, fairness, bias, discrimination | 13 |
| Critical government review | 12 |
| Political intervention | 8 |
| Government decision - procurement, ownership | 6 |
| Other | 5 |
| Corporate decision to cancel availability of system | 3 |
Sometimes there were multiple reasons for an ADS's failure; those reasons may have operated in parallel or in sequence, but each played a part. This suggests that the development of trustworthy ADS (and AI) depends on various, mutually reinforcing components; no single policy measure will be a panacea.
Take an example: critical media investigation was "responsible for identifying trials or implemented systems whose existence was not widely known until reported. In this way, media coverage is playing a significant role in rendering visible the systems and their impact on people." Presumably, that provided the transparency needed for civil society critique which took the form of "community organisations raising concerns and research outputs that raised concerns about the impact of ADS".
Who was involved?
As expected, ADS stakeholders are many and varied:
- some of the ADS were developed in-house by government organisations, some were purchased and some were outsourced to third party providers.
- the level of private company involvement in ADS varied across the 61 examples identified in the scoping study.
What are the recommendations?
The report identifies "10 recommendations which we believe are necessary to improve the landscape, culture and context of ADS use in the UK at local and national level", as follows:
- Create and maintain Public Registries.
- Resource public organisations, including regulators, to support greater transparency and accountability.
- Enhance procurement support.
- Require Equalities Impact Assessments and recognise the need to address systemic injustice.
- Review the legality of uses of automated systems and publicly detail how they have assessed that a proposed ADS complies with the relevant legal framework.
- Shift the burden of proof required to implement an ADS i.e. "those introducing ADS should be required to demonstrate the effectiveness of the changes they are implementing".
- Engage the public: "engage the public and civil society in discussion and decisions around the use of ADS that will materially affect individuals and communities".
- Understand the “No Go” areas i.e. the areas the public deem unacceptable for the use of ADS.
- Take responsibility in accounting for ADS history i.e. understanding how previous ADS have been implemented and accounting for how previous failures have been addressed.
- Ensure a politics of care approach i.e. "One that involves: ensuring time is taken to consult and investigate if and how ADS should be used; consultation involves extended stakeholders including affected communities; provides the potential for meaningful engagement and public review which includes being responsive to criticism to emerge as well as the option to refuse use."
Comment
ADS (and AI) are dependent upon context, but often raise important legal, technical and social issues which are relevant to other ADS and in different contexts. Many of these recommendations are already being developed in other jurisdictions and contexts; that reflects the global (and universal) nature of the issues. The report provides a wealth of references to how and where these recommendations are being called for and implemented. But given the growing use of ADS globally, the report inevitably cannot be a complete sourcebook. So all stakeholders - including ADS developers and wider industry, regulators and local and national governments - need to keep up to date with developments, including learning from failed ADS projects.
Knowing what's relevant will involve considering the specific ADS or AI in question and its context, and then applying that analysis to the facts. As far as we know, there's no way to automate that completely. So, if you would like to discuss how you procure, develop and deploy ADS/AI, please contact Tom Whittaker or Martin Cook.
This article was written by Tom Whittaker and Trulie Taylor.
From the Data Justice Lab's announcement of the report: "The Data Justice Lab has researched how public services are increasingly automated and government institutions at different levels are using data systems and AI. However, our latest report, Automating Public Services: Learning from Cancelled Systems, looks at another current development: the cancellation of automated decision-making systems (ADS) that did not fulfil their goals, led to serious harm, or met with significant opposition through community mobilization, investigative reporting, or legal action. The report provides the first comprehensive overview of systems being cancelled across western democracies."
https://datajusticelab.org/2022/09/23/new-research-report-learning-from-cancelled-systems/