The Data Justice Lab at Cardiff University has produced "the first comprehensive overview of [automated decision-making systems in public services] being cancelled across western democracies", giving the reasons for those cancellations and drawing lessons from them.

The report is the outcome of research into paused or cancelled government automated systems in the UK, Australia, Canada, Europe, New Zealand and the U.S., in areas including fraud detection, child welfare and policing.

The report is important for several reasons:

  • automated decision-making systems (ADS) are increasingly used in the delivery of public services, both to inform decisions about service delivery and to increase efficiency.
  • the report refers to ADS as "technical systems designed to help or replace human decision making": they influence or replace humans making decisions for public bodies.
  • the stakes are high: ADS are being used to detect crime and spot fraud, and to determine whether child protective services should act.
  • the report is not intended to discourage innovation.  Instead, it makes recommendations designed to strengthen governance, increase transparency and trustworthiness, and protect communities and individuals.  It provides useful in-depth case studies.  And there is much we can learn from failed projects: "examples of testing and learning from piloting any type of system can and should be shared with the associated learning".
  • (and, for those interested in the detail, the references provide a wealth of relevant resources).

Here we draw out some key points relating to the when, why, who and what.

When did the ADS fail?

The 61 ADS cancellations studied failed at the following stages:

Stage of ADS failure                 Count
Development / investigatory stage        3
After pilot / testing                    9
After implementation / use              31
Pre-emptive ban / moratorium            18


Identifying the precise stage at which a system failed is not straightforward.  For example, the report suggests that many of the pre-emptive ban / moratorium cases involved facial recognition systems which had already been trialled before a moratorium was imposed.  This is a useful illustration of the difficulty of objectively monitoring ADS and AI across jurisdictions and contexts, particularly where information is limited.

Why did the ADS fail?

They failed for the following reasons:

Reason for ADS failure                                        Count
Government agency decision - effectiveness                       31
Civil society critique or protest                                26
Critical media investigation                                     24
Legal action                                                     19
Government concern - privacy, fairness, bias, discrimination     13
Critical government review                                       12
Political intervention                                            8
Government decision - procurement, ownership                      6
Other                                                             5
Corporate decision to cancel availability of system               3


Sometimes there were multiple reasons for an ADS's failure: the reason counts above total 147 across the 61 cases.  Those reasons may have operated in parallel or sequentially, but each played a part.  This suggests that the development of trustworthy ADS (and AI) relies on various components working symbiotically; no single policy measure will be a panacea.

Take one example: critical media investigation was "responsible for identifying trials or implemented systems whose existence was not widely known until reported. In this way, media coverage is playing a significant role in rendering visible the systems and their impact on people."  Presumably, that provided the transparency needed for civil society critique, which took the form of "community organisations raising concerns and research outputs that raised concerns about the impact of ADS".

Who was involved?

As expected, the stakeholders involved in ADS are many and varied:

  • some of the ADS were developed in-house by government organisations, some were purchased and some were outsourced to third-party providers.
  • the level of private company involvement in ADS varied across the 61 examples identified in the scoping study. 

What are the recommendations?

The report identifies "10 recommendations which we believe are necessary to improve the landscape, culture and context of ADS use in the UK at local and national level", as follows:

  1. Create and maintain Public Registries.
  2. Resource public organisations, including regulators, to support greater transparency and accountability.
  3. Enhance procurement support.
  4. Require Equalities Impact Assessments and recognise the need to address systemic injustice.
  5. Review the legality of uses of automated systems and publicly detail the assessment that a proposed ADS complies with the relevant legal framework.
  6. Shift the burden of proof required to implement an ADS, i.e. "those introducing ADS should be required to demonstrate the effectiveness of the changes they are implementing".
  7. Engage the public, i.e. "engage the public and civil society in discussion and decisions around the use of ADS that will materially affect individuals and communities".
  8. Understand the “No Go” areas, i.e. the areas the public deem unacceptable for the use of ADS.
  9. Take responsibility in accounting for ADS history, i.e. understanding how previous ADS have been implemented and accounting for how previous failures have been addressed.
  10. Ensure a politics of care approach, i.e. "One that involves: ensuring time is taken to consult and investigate if and how ADS should be used; consultation involves extended stakeholders including affected communities; provides the potential for meaningful engagement and public review which includes being responsive to criticism to emerge as well as the option to refuse use."

Comment

ADS (and AI) are dependent on context, but they often raise important legal, technical and social issues which are relevant to other ADS in other contexts.  Many of these recommendations are already being developed in other jurisdictions and contexts, reflecting the global (and universal) nature of the issues.  The report provides a wealth of references to how and where these recommendations are being called for and implemented.  But given the growing use of ADS globally, the report inevitably cannot be a complete sourcebook.  So all stakeholders - including ADS developers and wider industry, regulators, and local and national governments - need to keep up to date with developments, including by learning from failed ADS projects.

Knowing what's relevant will involve considering the specific ADS or AI in question and its context, and then applying that analysis to the facts.  As far as we know, there's no way to automate that completely.  So, if you would like to discuss how you procure, develop and deploy ADS/AI, please contact Tom Whittaker or Martin Cook.

This article was written by Tom Whittaker and Trulie Taylor.