The Ada Lovelace Institute - an independent research institute which examines ethical and social issues arising from the use of data, algorithms and artificial intelligence - recently held a roundtable discussion with representatives from Canada, New Zealand and New York on how their governments and regulators approach the use of algorithms by public bodies.  In this post we summarise some of the points that were discussed regarding the Canadian and New Zealand approaches.  Whilst the focus was on how government bodies use algorithmic decision-making, many of the points are likely to be of interest to those in the private sector because calls for similar measures are being made in the UK (see our recent note on the CDEI's good governance proposals).

1. Transparency is needed.  Transparency means many things and will depend on the circumstances, but as a starting point regulators and public bodies need to know whether, and to what extent, algorithmic decision-making is being used.  Only then can risks be identified and managed.

The Canadian Directive on Automated Decision-Making - which applies to any system, tool, or statistical model used by the Government of Canada (with exceptions for some offices) to recommend or make an administrative decision about a member of the public - sets out transparency requirements.  For example, the Government of Canada retains the right to access and test the automated decision system, including all released versions of proprietary software components, where necessary for a specific audit, investigation, inspection, examination, enforcement action, or judicial proceeding, subject to safeguards against unauthorized disclosure.  Source code owned by the Government of Canada may be released, subject to limited exceptions.
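
By way of illustration only, the sketch below shows the kind of information a transparency requirement of this sort implies a public body would need to record for each automated decision system.  The structure and field names are our own assumptions, not taken from the Canadian Directive or the New Zealand Charter.

```python
from dataclasses import dataclass, field

# A hypothetical register entry showing the kind of information a public
# body might record to make its use of automated decision-making visible.
# Field names are illustrative only, not taken from the Canadian Directive.
@dataclass
class ADMRegisterEntry:
    system_name: str
    purpose: str                 # the administrative decision the system supports
    decision_role: str           # whether it "recommends" or "makes" the decision
    proprietary_components: list[str] = field(default_factory=list)
    source_code_releasable: bool = True   # subject to limited exceptions
    audit_access_retained: bool = True    # right to access and test the system

entry = ADMRegisterEntry(
    system_name="BenefitEligibilityScreener",   # hypothetical system
    purpose="Screening applications for a benefits programme",
    decision_role="recommends",
    proprietary_components=["vendor-scoring-engine"],
)
```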

2. Risk assess both in advance and while the algorithmic decision-making system is in use.  The Canadian Directive requires Algorithmic Impact Assessments (AIAs) to be completed before the automated decision system goes into production and to be updated as and when the system changes.  AIAs are defined as "A framework to help institutions better understand and reduce the risks associated with Automated Decision Systems and to provide the appropriate governance, oversight and reporting/audit requirements that best match the type of application being designed."

New Zealand's Algorithm Charter uses Algorithm Assessment Reports to help government bodies focus on those uses of algorithms that carry a high or critical risk of unintended harm for New Zealanders.  The Charter recognises that "very simple algorithms could result in just as much benefit (or harm) as the most complex algorithms depending on the content, focus and intended recipients of the business processes at hand."  Different measures will be required depending on the risk; a rough sketch of how such risk-tiering might work in practice follows at the end of this point.

Quality assurance measures may also need to be put in place before algorithmic decision-making is used and while it remains in operation.  Under the Canadian Directive this involves consulting with lawyers to check that legal requirements are met, and may involve peer review, such as independent third-party auditing.
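
To make the idea of risk-tiered requirements concrete, here is a minimal sketch assuming a hypothetical scoring scheme: an impact-assessment score maps to a tier, and the tier determines which safeguards must be in place before the system goes live.  The tiers, thresholds and requirement names are illustrative assumptions, not the actual scoring used by Canada's AIA or New Zealand's Charter.

```python
# Illustrative risk-tiering: an impact-assessment score maps to a tier,
# and the tier determines which safeguards apply before go-live.  The
# thresholds and requirements below are assumptions for illustration,
# not the actual scoring in Canada's AIA or New Zealand's Charter.

REQUIREMENTS_BY_TIER = {
    "low":      {"peer_review": False, "human_in_loop": False, "public_notice": True},
    "moderate": {"peer_review": True,  "human_in_loop": False, "public_notice": True},
    "high":     {"peer_review": True,  "human_in_loop": True,  "public_notice": True},
    "critical": {"peer_review": True,  "human_in_loop": True,  "public_notice": True},
}

def impact_tier(score: int) -> str:
    """Map a raw impact-assessment score (0-100) to a risk tier."""
    if score < 25:
        return "low"
    if score < 50:
        return "moderate"
    if score < 75:
        return "high"
    return "critical"

def required_safeguards(score: int) -> dict:
    """Look up the safeguards that must be in place for a given score."""
    return REQUIREMENTS_BY_TIER[impact_tier(score)]

# A score of 60 lands in the "high" tier, so peer review and a
# human-in-the-loop would both be required before deployment.
print(required_safeguards(60))
```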

3. Promoting public trust.  The New Zealand Charter includes a commitment to identify and actively engage with people, communities and groups who have an interest in algorithms, and to consult with those impacted by their use.  In Canada, some decisions are so high risk that they must have a "human-in-the-loop": such decisions "cannot be made without having specific human intervention points during the decision-making process; and the final decision must be made by a human".
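
A minimal sketch of what such a "human-in-the-loop" control point might look like in code, assuming a hypothetical risk tier drawn from an impact assessment: for high-risk decisions the system may only recommend an outcome, and the final decision must be recorded by a named human officer.  All names, tiers and thresholds are our own assumptions.

```python
from dataclasses import dataclass

# A minimal "human-in-the-loop" control point: for high-risk decisions the
# system may only recommend an outcome, and the final decision must come
# from a named human officer.  Names and tiers are illustrative assumptions.

@dataclass
class Recommendation:
    applicant_id: str
    outcome: str       # the system's suggested outcome, e.g. "approve"/"refuse"
    risk_tier: str     # taken from the impact assessment, e.g. "high"

def finalise(rec: Recommendation,
             human_decision: str | None = None,
             officer_id: str | None = None) -> str:
    """Return the final decision, enforcing human sign-off for high-risk cases."""
    if rec.risk_tier in ("high", "critical"):
        if human_decision is None or officer_id is None:
            # The system alone cannot conclude a high-risk decision.
            raise PermissionError("High-risk decision requires human sign-off")
        return human_decision   # officer_id would also be logged for audit
    return rec.outcome          # low-risk: the automated outcome may stand

# The officer overrides the system's recommendation to refuse.
final = finalise(Recommendation("A-123", "refuse", "high"),
                 human_decision="approve", officer_id="officer-42")
```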

4. Explanations as to how decisions were made may be required.  What these look like will, again, depend on the circumstances.  The New Zealand Algorithm Charter, signed by 26 of the country's government ministries and agencies, says that this may include: plain English documentation of the algorithm; making information about the data and processes available (unless a lawful restriction prevents this); and publishing information about how data are collected, secured and stored.  The Canadian Directive requires that explanations provided after a decision be "meaningful".
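
Again purely as an illustration, an "explanation record" of the kind these commitments point towards might bundle a plain-English summary, the main factors behind the decision, and pointers to documentation, data-handling information and recourse.  The structure (and the example URL) below is our own assumption, not a format prescribed by either country.

```python
# An illustrative "explanation record": a plain-English account of the
# decision, the main factors behind it, and pointers to documentation,
# data-handling information and recourse.  The structure is our own
# assumption, not a format prescribed by either country.

explanation_record = {
    "decision": "Application refused",
    "plain_english_summary": (
        "The application was refused because the declared income "
        "exceeded the programme threshold."
    ),
    "main_factors": ["declared income", "household size"],
    "algorithm_documentation": "https://example.gov/adm/benefit-screener",  # hypothetical URL
    "data_handling": "How the data were collected, secured and stored",
    "recourse": "You may ask for a review by a human officer within 28 days.",
}
```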

5. Peer review.  High-risk uses of algorithmic decision-making in Canada may require independent third-party auditing or peer review.

6. Recourse.  Finally, those subject to algorithmic decision-making must be told of their options for recourse should they wish to challenge a decision.

A recording of the roundtable discussion is here.  The Ada Lovelace Institute also recently held a roundtable on the UK's National Data Strategy, which looked at how to ensure transparent and accountable algorithmic decision-making by public bodies.

This article was written by Eve Jenkins and Tom Whittaker.