The Institute of Directors (“IoD”) - a professional organisation for company directors, senior business leaders and entrepreneurs - has released a ‘reflective checklist’ aimed at giving boards an understanding of where their organisations stand when it comes to the ethical use of Artificial Intelligence (“AI”).

A board-level understanding of the opportunities and risks of AI is essential. Boards, and companies, are under specific legal duties. They need to understand what AI they use, how it is used and what risks it presents. Despite this, an IoD members’ survey revealed that 80% of boards did not have a process in place to audit their AI, and that 86% of businesses already used some form of AI without their boards being aware of it.

The checklist outlines 12 principles which are intended to help guide the use of AI throughout an organisation. We set out the principles and key explanations below. They will require tailoring to each organisation, the specific AI systems being used, and the relevant legal and regulatory frameworks that govern that organisation and its use of those systems.

The report was published shortly before the UK government’s white paper on regulating AI. We will have to wait and see whether the IoD considers that its reflective checklist requires updating.

1) Monitor the evolving regulatory environment

Organisations should be aware of existing and prospective legislation and regulatory proposals affecting AI. Examples include:

  • The UK government’s white paper on AI regulation, referred to above;
  • The EU’s proposal of 21 April 2021 for the regulation of Artificial Intelligence - the EU AI Act (click here for a one-page flowchart);
  • The EU AI Liability Directive, which introduces rules specific to damages caused by AI systems.

Organisations should consider how existing regulation, such as data protection law or sector-specific rules, applies to their development and use of AI systems.

2) Continually audit and measure what AI is in use and what it is doing

The ethical principles must be auditable and measurable; they should be embodied in the organisation’s ISO 9001:2015 quality management system (or an equivalent suitable system, for example ISO/IEC 42001 once ratified) to ensure a consistent approach to the evaluation and use of AI by the organisation.

Organisations should consider whether their AI systems should be on their risk register and whether established board committees (e.g. audit, risk) have the relevant training and resources.

3) Undertake impact assessments which consider the business and the wider stakeholder community

Impact assessments must be undertaken which consider the possible negative effects and outcomes for employees who interact with the AI or whose jobs may be affected. Similarly, impact assessments must be undertaken for stakeholder groups, such as customers, suppliers, partners and shareholders.

4) Establish board accountability

The board is accountable, both legally and ethically, for the positive use of AI within the organisation, including third-party products which may embed AI technologies. Board members should be aware of this accountability. The board should hold the final veto on the implementation and use of AI in the organisation.

5) Set high-level goals for the business aligned with its values

High-level goals for the use of AI in the organisation must be created in line with its vision, mission and values. Examples of such goals are:

  • augmenting human tasks
  • enabling better, consistent and faster human decisions
  • preventing bias

Are these goals clear, written, measurable?

6) Empower a diverse, cross-functional ethics committee that has the power to veto

An ethics committee should be established within the organisation to oversee AI proposals and implementations. The committee should assess whether an AI implementation is likely to have a beneficial effect, understand its potential negative impacts, and make recommendations to the board accordingly. Depending on its assessment of those impacts, it should have the power to veto any proposed use of AI.

7) Document and secure data sources

When defining the purpose of a specific AI implementation, the sources of data must be identified and documented.

A clear method of detecting and reporting bias should be developed. If bias is discovered, action should be taken to identify its source and remove it from the AI. KPIs (key performance indicators) must be implemented to keep bias out of the organisation; one simple illustration of such a measure is sketched below.
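
By way of illustration only (the IoD checklist does not prescribe any particular metric), the short Python sketch below shows one common way a bias KPI can be made measurable: comparing the rate of favourable outcomes between two groups and flagging the gap if it exceeds a tolerance. The data and the 0.05 tolerance are hypothetical assumptions.

```python
# Illustrative sketch only: a minimal, hypothetical bias KPI.
# It computes the "demographic parity difference" - the gap in
# favourable-outcome rates between two groups - and flags a breach.
# The data and the 0.05 tolerance are assumptions for illustration.

def positive_rate(outcomes):
    """Share of decisions in a group that were favourable (1)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favourable-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical model decisions (1 = favourable outcome) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

TOLERANCE = 0.05  # assumed KPI threshold, to be set by the organisation
gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.2f}")
if gap > TOLERANCE:
    print("KPI breached: report to the ethics committee for review.")
```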

8) Train people to get the best out of AI and to interpret the results

Employees should be trained in the use of AI in order to prevent bias and potentially harmful outcomes, and should be aware of the systems used to monitor and report bias.

9) Comply with privacy requirements

The AI must be designed and audited to ensure compliance with data privacy legislation such as the UK GDPR (see our articles on data protection here). This entails the training of AI technical teams so that they may adequately challenge AI developers to ensure AI transparency and compliance with the ethics framework. Technical teams should liaise with the ethics committee regarding their findings.

10) Comply with secure-by-design requirements

The AI must be secure by design and withstand the scrutiny of external testing and certification processes such as Cyber Essentials Plus. Penetration testing may be used to ensure that data sets used in the AI cannot be breached.

11) Test and remove from use if bias and other impacts are discovered

The decision to utilise AI rests with the board; so too does accountability for ongoing safe and consistent AI performance. As a result, the board must ensure that AI is tested prior to implementation to ensure compliance with the ethics framework. If the AI is externally sourced, this includes considering whether ethical requirements are ingrained in the procurement process.

12) Review regularly

Decisions made by AI should be continually monitored and evaluated against the purpose of the AI and the ethical framework in place.

If the AI deviates in any way from its purpose or the ethical framework, those deviations should:

  • be documented;
  • be reported to the ethics committee;
  • result in corrective actions being implemented within a reasonable period of time (one simple way of recording such deviations is sketched below).
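
By way of illustration only, the sketch below shows one minimal way such deviations might be documented and queued for the ethics committee. The metric, threshold and record structure are hypothetical assumptions rather than anything prescribed by the IoD checklist.

```python
# Illustrative sketch only: a minimal, hypothetical deviation record.
# Every field, metric and threshold here is an assumption for
# illustration, not part of the IoD checklist.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DeviationLog:
    entries: list = field(default_factory=list)

    def record(self, metric: str, value: float, limit: float) -> None:
        """Document a deviation and mark it for ethics-committee review."""
        if value > limit:
            entry = {
                "date": date.today().isoformat(),
                "metric": metric,
                "value": value,
                "limit": limit,
                "status": "reported to ethics committee",
            }
            self.entries.append(entry)
            print(f"Deviation recorded: {entry}")

log = DeviationLog()
# Hypothetical monthly review: error rate against an assumed 10% limit.
log.record(metric="error_rate", value=0.14, limit=0.10)
```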

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong. This article was prepared by Callum Payne.