The UK Cabinet Office's Central Digital and Data Office has published an algorithmic transparency standard for collecting information about how government uses algorithmic tools.  The standard is to be piloted by several UK public sector bodies working with the Centre for Data Ethics and Innovation (CDEI).  Here we look at why it has been published and what it includes.

Why has the standard been published?

One of the three pillars of the UK's National AI Strategy (which we wrote about here) is "Governing AI effectively".  The strategy recognises that the public sector must be an exemplar: "The government must lead from the front and set an example in the safe and ethical deployment of AI."

The strategy endorsed, and committed to taking forward, the CDEI's recommendation in its report on algorithmic bias (which we wrote about here) that:

Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Government should conduct a project to scope this obligation more precisely, and to pilot an approach to implement it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.

The AI Strategy recognises that creating a national standard helps to:

  • ensure that citizens "have confidence and trust in how data is being processed and analysed to derive insights"; and
  • extend the "UK's long standing open data and data ethics leadership".

The standard is touted as one of the world’s first national algorithmic transparency standards. Its publication also reflects the fact that governance mechanisms specifically for AI are still relatively new.  Adrian Weller, Programme Director for AI at The Alan Turing Institute and Member of the Centre for Data Ethics and Innovation’s Advisory Board, said: "Organisations are increasingly turning to algorithms to automate or support decision-making. We have a window of opportunity to put the right governance mechanisms in place as adoption increases."

The standard must also be seen as part of the bigger picture.  The UK's AI strategy is to make "Britain a global AI superpower".  International developments in AI ethics and regulation seek to strike a balance between risk management and cultivating innovation, with an eye to how proposals compare across jurisdictions so that they remain competitive.  Different jurisdictions are taking different approaches to regulating AI (as we wrote about here).  Publishing standards also helps to strengthen "the UK’s position as a world leader in AI governance".  The EU's proposed regulation of AI explicitly recognises that those regulations will help set standards globally, just as the GDPR did.  So expect to see the UK's standard influencing those developed elsewhere.

What does the standard include?

The Algorithmic Transparency Standard and template are designed so that public sector organisations record the following information in a standardised way (a sketch of how such a record might look in code follows the list below):

  • a short non-technical description of the algorithmic tool - an overview of what the tool is and why it is being used - including:
    • how the tool works and how it is incorporated into the decision-making process; and
    • why the tool is being used, including the problem it is intended to solve, and how people can find out more about the tool or ask a question.
  • more detailed technical information, such as specific details on how the tool works and the data it uses - including information about:
    • who owns and has responsibility for the algorithmic tool;
    • what the tool is for and its technical specifications;
    • more detail about how the tool affects decision-making, including which decisions humans take in the overall process and the options for human review of the tool;
    • the datasets used, an overview of how they were used to train and run the algorithmic tool, how and why the data was collected, and details of who has access to the data; and
    • risk and impact assessments.
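
To make the two tiers concrete, here is a minimal, hypothetical sketch of how an organisation might structure such a record in code.  This is our own illustration: the class and field names paraphrase the tiers described above and are not taken from the official template.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: these names paraphrase the standard's two tiers;
# they are not the official template's field names.

@dataclass
class TierOneDescription:
    """Short, non-technical overview aimed at a general audience."""
    tool_name: str
    how_it_works: str             # plain-English summary of the tool
    role_in_decision_making: str  # how the tool feeds into decisions
    problem_being_solved: str     # why the tool is being used
    contact_for_questions: str    # how to find out more or ask a question

@dataclass
class TierTwoDetail:
    """More detailed technical information for specialist audiences."""
    owner_and_responsibility: str
    purpose_and_technical_specification: str
    human_oversight: str             # decisions humans take; review options
    datasets_used: List[str]
    data_collection_and_access: str  # how and why data was collected; who can access it
    risk_and_impact_assessments: List[str] = field(default_factory=list)

@dataclass
class AlgorithmicToolRecord:
    tier_one: TierOneDescription
    tier_two: TierTwoDetail
```

Separating the two tiers in this way mirrors the standard's distinction between information aimed at the general public and information aimed at technical specialists.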

The standard does not specify how much explanation is required.  The fact that the standard requires different levels of information (a "short non-technical description" compared to "more detailed technical information") indicates that public bodies should bear in mind who the potential audience for the recorded information is.  Are they citizens, who may have less technical understanding of how AI works?  Are they technical experts engaged by the public authority to procure, develop, deploy and audit algorithmic tools?  Different levels of detail will be required for each.

The amount of explanation will also depend on the circumstances.  The greater the risks of using the algorithmic tool, the greater the explanation presumably required.  However, any public body using the standard will need to consider its legal and regulatory obligations in each case in which it uses algorithmic tools, to understand what information it should be collecting and in what detail.

Whilst the goal is for the standard to be mandatory for "all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals", the standard remains in a pilot phase.  So for now the standard is a useful indicator of the types of information that public bodies - and indeed private organisations - are likely to want to record about how they use algorithmic tools, and the format in which to record that information.

This article was written by Tom Whittaker and James Flint.