The Office for Artificial Intelligence, Cabinet Office, and Central Digital & Data Office have published guidance for public bodies using automated decision-making, with a view to ensuring an ethical and transparent approach.

What is automated decision-making?

Automated decision-making encompasses (1) solely automated decisions (without human input) and (2) automated assisted decision-making (assisting or enhancing human judgement).

Example of solely automated decision-making

  • A worker’s pay is linked to their productivity, which is monitored using an automated system. The decision about how much pay the worker receives for each shift is made automatically by reference to the data collected about their productivity.

Example of automated decision-making assisting human judgement

  • An employee is issued with a warning about late attendance. The warning was issued because the employer’s automated clock-in system highlighted that the employee had been late on a number of occasions. The decision to issue the warning was then taken by a manager, informed by the automated system.
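
By way of illustration only, the distinction between the two forms can be sketched in code. The following Python sketch is ours, not the guidance's; the names, pay rates and thresholds are hypothetical.

    from dataclasses import dataclass

    # Hypothetical illustration only: rates and thresholds are invented.

    @dataclass
    class Shift:
        worker_id: str
        units_produced: int

    BASE_PAY = 50.00       # flat pay per shift (assumed)
    RATE_PER_UNIT = 0.75   # productivity bonus per unit (assumed)

    def solely_automated_pay(shift: Shift) -> float:
        """Solely automated decision: pay is computed and applied directly
        from monitored productivity data, with no human input."""
        return BASE_PAY + RATE_PER_UNIT * shift.units_produced

    LATE_THRESHOLD = 3  # late occasions before the system flags an employee (assumed)

    def assisted_lateness_flag(late_occasions: int) -> bool:
        """Automation-assisted decision-making: the system only highlights
        repeated lateness; the decision to issue a warning rests with a manager."""
        return late_occasions >= LATE_THRESHOLD

    if __name__ == "__main__":
        print(solely_automated_pay(Shift("W123", units_produced=40)))  # 80.0
        if assisted_lateness_flag(late_occasions=4):
            print("Flagged for manager review; a human takes the final decision")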

The need for guidance

Public bodies are increasingly using automated decision-making (see our article on the widespread use of algorithmic decision-making by local councils). Although automated decision-making brings (or is supposed to bring) benefits, there are risks that need to be managed, and good governance is required (see the Centre for Data Ethics and Innovation's reports on the risk of bias in algorithmic decision-making and the need for good governance).

Of particular concern, the guidance notes recent surveys suggesting a "distinct distrust in the regulation of advanced technology." A review by the Committee on Standards in Public Life found that the government should produce clearer guidance on using artificial intelligence ethically in the public sector. However, the new guidance observes that existing guidance can be lengthy, complex and sometimes overly abstract.

Guidance framework 

This guidance provides a seven-point framework to "help government departments with the safe, sustainable and ethical use of automated or algorithmic decision-making systems." The framework applies to both forms of automated decision-making.

For each point, the framework offers ‘Practical Steps’ (as well as further reading); we include one Practical Step for each point by way of illustration.

The guidance says that a public body using automated decision-making should do the following:

  1. Test to avoid any unintended outcomes or consequences. Prototype and test your algorithm or system so that it is fully understood, robust and sustainable, delivers the intended policy outcomes, and surfaces any unintended consequences.

    Practical steps include: "be clear on what you are testing, e.g. is it accuracy, security, reliability, fairness, explainability of your system? Undertake regular impact and risk assessments, making sure testing is done by someone properly qualified and independent."

  2. Deliver fair services for all of our users and citizens. Involve a multidisciplinary and diverse team in the development of the algorithm or system to spot and counter prejudices, bias and discrimination.

    Practical steps include: Run ‘bias and safety bounties’, where ‘hackers’ are incentivised to seek out and identify discriminatory elements. You must also complete an Equality Impact Assessment to comply with the Equality Act 2010 and the Public Sector Equality Duty.

  3. Be clear who is responsible. Work on the assumption that every significant automated decision should be agreed by a minister, and that all major processes and services being considered for automation should have a senior owner.

    By way of comparison, we note a similar requirement in the Canadian Directive on Automated Decision-Making, which provides that "the Assistant Deputy Minister responsible for the program using the Automated Decision System, or any other person named by the Deputy Head, is responsible" for complying with the Directive's requirements for each algorithmic decision program (see our article which picks up on the Directive here).

    Practical steps include: Assign a senior owner or senior process owner to monitor all major processes and services, making them responsible for ensuring that the necessary mitigating measures are taken so that the system does not cause unintended harm.

  4. Handle data safely and protect citizens’ interests. Ensure that the algorithm or system protects data adequately, handles it safely, and is fully compliant with data protection legislation.

    Practical steps include: Make sure the seven key principles in the GDPR are at the heart of your system, with a particular emphasis on Article 22 in relation to solely automated decision-making.
     
  5. Help users and citizens understand how it impacts them. Work on the basis of a ‘presumption of publication’ for all algorithms that enable automated decision-making, notify citizens in plain English when a process or service involves automated decision-making, and agree any exceptions to that rule with government legal advisors before ministerial authorisation.

    Practical steps include: Share information about automated decision-making incidents through collaborative channels, and appoint an accountable officer to respond to citizen queries in real time.

  6. Ensure that you are compliant with the law. Make sure that your algorithm or system complies with the necessary legislation and has full legal sign-off from the relevant government legal advisors.

    Practical steps include: Engage with legal advisors early in the development process of an algorithm or system.

  7. Build something that is future proof. Continuously monitor the algorithm or system, institute formal review points (recommended at least quarterly) and enable end-user challenge, to ensure that the system delivers the intended outcomes and mitigates unintended consequences that may develop over time (referring to points 1 to 6 throughout).

    Practical steps include: Establish ‘formal review’ points (at least quarterly), re-examining datasets and checking whether the policy intent remains the same. Incorporate any new risks into your assessments and adapt to any changes in legislation. A minimal sketch of such a review check follows below.
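
To make point 7 concrete, the following is a minimal Python sketch of a scheduled review check of the kind the framework describes. The metric names, thresholds and quarterly interval are our assumptions for illustration, not taken from the guidance.

    import datetime

    # Hypothetical illustration: metric names and thresholds are assumed,
    # not taken from the guidance.

    REVIEW_INTERVAL = datetime.timedelta(days=91)  # roughly quarterly (assumed)
    ACCURACY_FLOOR = 0.95                          # assumed acceptance threshold
    MAX_GROUP_DISPARITY = 0.05                     # assumed fairness tolerance

    def review_due(last_review: datetime.date, today: datetime.date) -> bool:
        """Formal review points: flag when a quarterly review is overdue."""
        return today - last_review >= REVIEW_INTERVAL

    def review_findings(accuracy: float, disparity: float) -> list[str]:
        """Compare monitored metrics against thresholds agreed at sign-off;
        any finding should trigger the mitigations from points 1 to 6."""
        findings = []
        if accuracy < ACCURACY_FLOOR:
            findings.append(f"accuracy {accuracy:.2f} below floor {ACCURACY_FLOOR}")
        if disparity > MAX_GROUP_DISPARITY:
            findings.append(f"outcome disparity {disparity:.2f} exceeds tolerance")
        return findings

    if __name__ == "__main__":
        if review_due(datetime.date(2021, 1, 4), datetime.date.today()):
            for finding in review_findings(accuracy=0.93, disparity=0.07):
                print("Escalate to senior owner:", finding)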