The Centre for Data Ethics and Innovation (CDEI) has published its review into bias in algorithmic decision-making: how to use algorithms to promote fairness, not undermine it. We wrote recently about the report's observations on good governance of AI. Here, we look at the report's recommendations around transparency of artificial intelligence and algorithmic decision-making used in the public sector (we use "AI" here as shorthand for both).

The need for transparency

The public sector makes decisions which can have significant impacts on private citizens, for example decisions relating to individual liberty or entitlement to essential public services. The report notes that there is increasing recognition of the opportunities offered by the use of data and AI in decision-making. Whether those decisions are made using AI or not, transparency continues to be important to ensure that:

  • public bodies use public money responsibly;
  • risks are managed appropriately;
  • those who make decisions can be held accountable;
  • standards are improved; and
  • there is public trust in the use of AI in the public sector.
However, the report identifies what are, in our view, three particular difficulties in applying transparency to public sector use of AI.

First, the risks are different. As the report explains at length, there is a risk of bias when using AI. For example, where a subgroup within the data is small, generalisations drawn from that data can produce disproportionately high error rates for minority groups. In many applications of predictive technologies, false positives may have limited impact on the individual. In particularly sensitive areas, however, both false negatives and false positives carry significant consequences, and bias may mean certain people are more likely to experience these negative effects. The risk of using AI can be particularly great for decisions made by public bodies, given the significant impacts they can have on individuals and groups.
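To make the subgroup point concrete, here is a minimal, purely illustrative simulation (ours, not the report's). All of the data, group sizes and relationships below are synthetic assumptions: a single model is fitted to data dominated by a majority group, and a small minority group, whose feature/outcome relationship differs, ends up with a much higher error rate.

```python
# Purely illustrative, synthetic simulation (ours, not the CDEI's): one model is
# fitted to data dominated by a majority group, and the small minority group,
# whose feature/outcome relationship differs, suffers a much higher error rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_major, n_minor = 9500, 500  # the minority subgroup is only ~5% of the data

# Majority group: outcome is positively related to the feature.
x_major = rng.normal(size=n_major)
y_major = (x_major + rng.normal(scale=0.5, size=n_major) > 0).astype(int)

# Minority group: the relationship is reversed.
x_minor = rng.normal(size=n_minor)
y_minor = (-x_minor + rng.normal(scale=0.5, size=n_minor) > 0).astype(int)

# One model is generalised across both groups.
X = np.concatenate([x_major, x_minor]).reshape(-1, 1)
y = np.concatenate([y_major, y_minor])
model = LogisticRegression().fit(X, y)

# The fitted boundary tracks the majority pattern, so minority predictions
# are close to inverted.
print("majority error rate:", (model.predict(x_major.reshape(-1, 1)) != y_major).mean())
print("minority error rate:", (model.predict(x_minor.reshape(-1, 1)) != y_minor).mean())
```

Nothing here depends on the particular technique used; the point is simply that error rates can be very uneven across groups even when overall accuracy looks acceptable.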

Second, the CDEI's interviews found that it is difficult to map how widespread algorithmic decision-making is in local government. Without transparency requirements it is harder to see when AI is used in the public sector, which risks suggesting intended opacity (see our previous article on the widespread use of algorithmic decision-making by local councils here), how the risks are managed, or how decisions are actually made.

Third, there are already several transparency requirements on the public sector (think publication of internal decision-making guidance, or equality impact assessments), but public bodies may find it unclear how some of these should apply in the context of AI (data protection is a notable exception, given guidance from the Information Commissioner's Office).

What is transparency?

What transparency means depends on the context. It does not necessarily mean publishing algorithms in their entirety; that alone is unlikely to improve understanding of, or trust in, how they are used. The report also recognises that some citizens may, rightly or wrongly, make decisions based on what they believe the published algorithms mean.

The report sets out useful requirements to bear in mind when considering what type of transparency is desirable:

  • Accessible: interested people should be able to find it easily.
  • Intelligible: they should be able to understand it.
  • Useable: it should address their concerns.
  • Assessable: if requested, the basis for any claims should be available.
Recommendation: a transparency obligation

In order to give clarity to what is meant by transparency, and to improve it, the report recommends:

Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence [i.e. affecting the outcome in a meaningful way] on significant decisions [i.e. decisions that have a direct impact, most likely an adverse legal effect or one that otherwise significantly affects the individual] affecting individuals. Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.

Some exceptions will be required, for example where transparency risks compromising outcomes or intellectual property, or for reasons of security and defence.

Further clarification of the obligation, such as the meaning of "significant decisions", will also be required. As a starting point, though, the report anticipates that a mandatory transparency publication would include the following (we offer a hypothetical sketch of such a publication, as structured data, after this list):

  1. Overall details of the decision-making process in which an algorithm/model is used.
  2. A description of how the algorithm/model is used within this process (including how humans provide oversight of decisions and the overall operation of the decision-making process).
  3. An overview of the algorithm/model itself and how it was developed, covering for example:
  • The type of machine learning technique used to generate the model.
  • A description of the data on which it was trained, an assessment of the known limitations of the data and any steps taken to address or mitigate these.
  • The steps taken to consider and monitor fairness.
  • An explanation of the rationale for why the overall decision-making process was designed in this way, including impact assessments covering data protection, equalities, human rights, carried out in line with relevant legislation. It is important to emphasise that this cannot be limited to the detailed design of the algorithm itself, but also needs to consider the impact of automation within the overall process, circumstances where the algorithm isn't applicable, and indeed whether the use of an algorithm is appropriate at all in the context.
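As a hypothetical illustration only, the anticipated contents above could be captured as a structured record along the following lines. The field names and types are our own assumptions, not the CDEI's; any real schema would emerge from the scoping project the report recommends.

```python
# Hypothetical sketch of the report's anticipated publication contents as a
# structured record. Field names are our own invention, not the CDEI's.
from dataclasses import dataclass, field

@dataclass
class AlgorithmTransparencyPublication:
    # 1. Overall details of the decision-making process
    decision_process_overview: str
    # 2. How the algorithm/model is used within that process
    role_in_process: str
    human_oversight: str
    # 3. Overview of the algorithm/model itself and its development
    ml_technique: str                                   # e.g. "logistic regression"
    training_data_description: str
    known_data_limitations: list[str] = field(default_factory=list)
    mitigation_steps: list[str] = field(default_factory=list)
    fairness_monitoring: str = ""
    design_rationale: str = ""                          # why the overall process was designed this way
    impact_assessments: list[str] = field(default_factory=list)  # e.g. data protection, equalities, human rights
```

A structured record of this kind might also underpin the register idea discussed below, since a register could simply be a published collection of such records.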
The report expects that identifying the right level of information to publish about the AI itself will be the most novel aspect. The CDEI suggests that other examples of transparency may be a useful reference, including the Government of Canada's Algorithmic Impact Assessment, a questionnaire designed to help organisations assess and mitigate the risks associated with deploying an automated decision system (and which we referred to in a recent post about global perspectives on regulating for algorithmic accountability).
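For a flavour of the questionnaire approach, here is a deliberately toy sketch. The questions, weights and thresholds are invented for illustration only; the real Algorithmic Impact Assessment defines its own questionnaire and scoring.

```python
# Toy illustration of a questionnaire-based impact assessment. The questions,
# weights and thresholds below are invented; the real Algorithmic Impact
# Assessment uses its own questionnaire and scoring.
QUESTIONS = {
    "affects_legal_rights": 3,       # weight applied if answered "yes"
    "fully_automated_decision": 2,
    "vulnerable_population": 2,
    "irreversible_outcome": 3,
    "personal_data_used": 1,
}

def impact_level(answers: dict[str, bool]) -> str:
    """Map yes/no answers to a coarse impact level via a weighted score."""
    score = sum(weight for question, weight in QUESTIONS.items() if answers.get(question))
    if score <= 2:
        return "Level I (little to no impact)"
    if score <= 5:
        return "Level II (moderate impact)"
    if score <= 8:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

print(impact_level({"affects_legal_rights": True, "personal_data_used": True}))
# -> "Level II (moderate impact)" under these invented weights
```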

A public register?

Stopping short of a formal recommendation, the CDEI also notes that the House of Lords Science and Technology Select Committee and the Law Society have both recently recommended that parts of the public sector should maintain a register of algorithms in development or use (echoing calls from others for such a register as part of the discussion on the UK's National Data Strategy). However, the report notes the complexity of achieving such a register and therefore concludes that "the starting point here is to set an overall transparency obligation, and for the government to decide on the best way to coordinate this as it considers implementation", with a potential register to be piloted in a specific part of the public sector.