I recently attended the All Party Parliamentary Group on the Future of Work’s event discussing algorithmic impact assessments (“AIAs”), which looked at how AIAs and the regulation of AI are developing and being implemented internationally. This follows the Institute for the Future of Work’s recent policy briefing on AIAs, which explores different models of AIAs and the proposed key stages of an AIA.

What is clear from this week’s event and the APPG and IFOW’s recent studies is that artificial intelligence, algorithms and automated decision-making are beginning to impact all aspects of life and, in particular, working lives. However, the regulatory framework in the UK has failed to keep pace with such technological advances. Whilst we have data protection and equality laws, there is no statutory framework in the UK that specifically regulates the use of AI in the workplace.

It was therefore incredibly interesting to hear from Canadian and US counterparts on relevant developments in their respective jurisdictions. Benoit Deshaies of the Treasury Board of Canada Secretariat spoke of Canada’s existing Directive on Automated Decision-Making and its requirement for AIAs to be conducted. The Canadian AIA is based on a questionnaire model and must be completed by government agencies before the deployment of an automated decision-making system; it is therefore not yet a requirement for private industry. Brittany Smith of the US research institute Data & Society spoke of a new Bill, the proposed Algorithmic Accountability Act, which would require companies to assess the impacts of the automated systems they use and sell. Brittany also spoke of Data & Society’s recent ‘Assembling Accountability’ report, which maps the challenges of constructing AIAs and is well worth a read.

In view of the lack of regulation in the UK, the APPG’s recent ‘New Frontier’ report has also proposed a new Accountability for Algorithms Act, which would place the regulation of AI on a statutory footing and would require an AIA to be conducted before any implementation of artificial intelligence-based technologies in the workplace. The related policy briefing proposes four key stages of an AIA, as follows (a short illustrative sketch follows the list):

  • Identifying individuals and communities who might be impacted
  • Conducting an ‘ex ante’ risk and impact analysis
  • Taking appropriate mitigation action
  • Evaluating continuously
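
For organisations thinking about how to operationalise these stages ahead of any statutory requirement, it can help to treat them as a repeatable, documented process rather than a one-off form. The sketch below is purely illustrative: the stage names follow the list above, but the record structure, field names and example system are assumptions of mine, not an official template from the IFOW briefing or any regulator.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative only: stage names follow the four steps listed above;
# the record structure and field names are assumptions, not an official template.
AIA_STAGES = [
    "Identify impacted individuals and communities",
    "Conduct ex ante risk and impact analysis",
    "Take appropriate mitigation action",
    "Evaluate continuously",
]

@dataclass
class StageRecord:
    stage: str
    findings: str
    completed_on: Optional[date] = None  # None until the stage has been carried out

@dataclass
class AlgorithmicImpactAssessment:
    system_name: str
    records: list = field(default_factory=list)

    def record_stage(self, stage: str, findings: str) -> None:
        """Log the findings for one of the four stages, dated today."""
        if stage not in AIA_STAGES:
            raise ValueError(f"Unknown AIA stage: {stage}")
        self.records.append(StageRecord(stage, findings, date.today()))

    def outstanding_stages(self) -> list:
        """Return the stages that have not yet been documented."""
        done = {r.stage for r in self.records}
        return [s for s in AIA_STAGES if s not in done]

# Hypothetical usage for a workplace tool
aia = AlgorithmicImpactAssessment("CV-screening tool")
aia.record_stage(AIA_STAGES[0], "Job applicants and existing staff are in scope")
print(aia.outstanding_stages())  # the three stages still to be completed
```

The point of keeping a running record like this, rather than completing a single questionnaire, is that the final stage (continuous evaluation) only works if the assessment remains a live document throughout the technology’s lifecycle.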

A requirement to conduct such AIAs would mark a significant shift from the current reactive approach to a far more proactive one, requiring impacts to be analysed from the outset and throughout the lifecycle of any AI-based technology proposed for use in the workplace.

Whilst there is no immediate sign of statutory regulation in the UK, the EU is in the process of developing its own AI Act. Considering the impact that the GDPR has had, governments and industry globally will no doubt be keeping a close eye on how the EU’s proposed AI Act develops. In the meantime, businesses should consider what steps they can take now to assess the potential and ongoing impacts of such technologies, both those already in use in the workplace and any new technologies being proposed.