It’s fair to say that enthusiasm for using AI (in some quarters at least) is growing at a significant pace, driven in large part by the quest for time and cost savings. The pace of change, coupled with ever-evolving technology, has made it difficult for governments across the globe to decide how best to regulate its use. That said, signs of progress appear to be afoot.
For example, in the EU, the Artificial Intelligence Act, which aims to create a statutory framework regulating the use of AI within the EU, has recently been approved (in relation to which, see our series of blog posts here). The UK government, on the other hand, has steered away from a legislative approach to AI-specific regulation, choosing instead, for the time being at least, to adopt a non-statutory, principles-based approach in a bid to be more nimble and better able to respond to change in this fast-moving area (see the UK’s AI White Paper, which we previously wrote about here). This means that, for now, the UK will rely on existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission and the Employment Agency Standards Inspectorate – to take forward protections in this area. Regulators will be expected to interpret and apply five new “values-focused cross-sectoral principles” to address any AI risks falling within their remits, in accordance with existing laws and regulations. That said, we may see the legislative position evolve: Michelle Donelan, Secretary of State at the Department for Science, Innovation and Technology (DSIT), has reportedly indicated recently that the government would take steps to legislate once the risks of AI become apparent.
In the meantime, an example of the principles-based approach in action is the recently issued guidance on ‘Responsible AI in Recruitment’, created by DSIT. The guidance has been issued partly off the back of the AI White Paper, and partly in response to DSIT’s ‘Industry Temperature Check’, an earlier stakeholder engagement study which identified that HR and recruitment organisations would benefit from clearer guidance on how AI ‘assurance mechanisms’ can be used to support the responsible procurement of AI systems.
Now, if you are not absolutely up to speed on all things AI, the good news is that the guidance has been drafted with you in mind! It is stated as being accessible to readers with minimal understanding of AI technologies, and it walks you through assessing risks and deploying assurance mechanisms at each stage of the AI procurement and deployment process.
Before looking at what this guidance might mean on the ground, why are we concerned about the use of AI in recruitment, in particular? It’s because HR and resourcing teams have been quick to adopt the AI and automation tools on offer as a way of speeding up the process of managing and whittling down large numbers of applications. Helpfully for those less familiar with the available technologies, the guidance contains an annexe setting out a number of example use cases.
At the more basic end of the spectrum are CV screening tools, which sift and evaluate CVs against keywords and criteria defined by the recruiting employer. At the more sophisticated end, you might find facial recognition tools that can be used in interviews to assess a candidate’s emotions, engagement and ‘desirable’ behaviours, as evidenced through their expression and tone during the interview.
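To make the mechanics of the more basic tools concrete, the sketch below shows a deliberately naive, hypothetical keyword-based CV screen. The keywords, weights and function names are our own illustrative assumptions, not drawn from the guidance or any particular product; real commercial tools are considerably more sophisticated, but the underlying idea of scoring candidates against employer-defined criteria is similar.

```python
# Hypothetical illustration only: a naive keyword-based CV screen.
# The keywords, weights and shortlist size are assumed for the example.

# Employer-defined keywords and the weight attached to each
CRITERIA = {
    "python": 3,
    "project management": 2,
    "stakeholder engagement": 2,
    "chartered": 1,
}

def score_cv(cv_text: str) -> int:
    """Score a CV by summing the weights of the criteria keywords it mentions."""
    text = cv_text.lower()
    return sum(weight for keyword, weight in CRITERIA.items() if keyword in text)

def shortlist(cvs: dict[str, str], top_n: int = 5) -> list[str]:
    """Rank candidates by keyword score and return the top_n names."""
    return sorted(cvs, key=lambda name: score_cv(cvs[name]), reverse=True)[:top_n]
```

Even in a toy example like this, the choice and weighting of the criteria sit entirely with the recruiting employer, which is precisely where the risks discussed below can creep in.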
As useful as such tools can be, they also pose novel risks in terms of the potential for bias, discrimination and digital exclusion (the latter being where candidates lack the requisite knowledge of, or access to, technology). Take the CV screening tool as an example: where the software is ‘trained’ on an employer’s historical recruitment data, and inherent bias is already present in that data set, the tool risks reinforcing those existing biases. As the guidance notes, facial recognition software has been shown to have differing error rates in terms of recognition, with the poorest accuracy found in subjects who are female, black, aged 18-30 or who have facial paralysis. This software therefore has the potential to pose discrimination risks.
The purpose of the new guidance is to identify the potential ethical risks of AI in recruitment and to outline how ‘AI assurance mechanisms’ can be used to assess those risks and help ensure compliance with statutory and regulatory requirements.
So, what do we mean by ‘AI assurance mechanisms’? Well, put simply, this refers to measures used to assess the trustworthiness of AI systems. Take the CV screening tool again – what do you need to consider in terms of risk, and how might you go about assessing those risks? One example would be to conduct an algorithmic impact assessment. There is no universally agreed definition or form of algorithmic impact assessment. However, in much the same way as a data protection impact assessment assesses the particular data protection risks posed by a project, an algorithmic impact assessment involves a self-assessment of any existing or proposed system that is based on AI, or that includes an element of automated decision-making, and seeks to evaluate the impact of that system in terms of, for example, the potential for bias. Part of that assessment might include engaging with key stakeholders (including potential job applicants) to identify risks. It might also examine the keywords and directions you are proposing to use to ‘train’ the software, to identify in advance any potential for bias. As noted, there is no prescribed process for conducting an algorithmic impact assessment; however, the Institute for the Future of Work has written about algorithmic impact assessments here.
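By way of illustration only, one simple quantitative check that might feed into such an assessment is a comparison of the tool’s shortlisting rates across candidate groups, on the basis that a markedly lower rate for one group may signal potential bias worth investigating. The sketch below is our own hypothetical example, not a methodology prescribed by the guidance; the group labels, sample data and 0.8 threshold (borrowed from the US ‘four-fifths’ rule of thumb) are assumptions, and such a ratio is not a legal test of discrimination.

```python
# Hypothetical illustration: comparing shortlisting rates across groups as one
# possible input to an algorithmic impact assessment. The data, group labels and
# the 0.8 threshold are assumptions for the example, not a legal test.
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes is a list of (group, was_shortlisted) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparities(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag any group whose selection rate falls below threshold times the highest rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Example usage with made-up screening outcomes
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(outcomes)
print(rates)                    # approximately {'group_a': 0.67, 'group_b': 0.33}
print(flag_disparities(rates))  # ['group_b']
```

A real assessment would, of course, go well beyond a single ratio, looking also at the training data, the keywords used and the stakeholder engagement described above.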
The main takeaway from the guidance is the importance of carefully assessing the potential risks posed by your use of AI throughout both the procurement and deployment process. Helpfully for employers, there is no suggestion that organisations should be using each and every available assurance measure when seeking to implement AI systems. Some organisations may have ready access to the resources required to deploy a broad spectrum of assurance measures at each stage of the process, whereas others may only have the resources to assess the key, highest-risk areas. The key point, however, is that some form of risk analysis and testing should be conducted, to mitigate the inherent risks posed by AI.
The consequences of deploying AI tools that have not been sufficiently risk-assessed in advance are not easily quantifiable, nor has this yet been widely tested in the courts. However, where an unintended impact of an AI tool gives rise to discrimination, for example, we expect that a Tribunal may take into account an employer’s failure to suitably risk-assess the AI tool prior to, or during, its deployment when considering any defence the employer may offer to such a claim.
Whilst the guidance is specifically aimed at putting in place assurance measures during the recruitment process, best practice dictates going through a similar process when deploying AI elsewhere in your organisation, wherever its use will impact on your people.
If your organisation is considering implementing AI-based technologies, or is indeed already using AI, and you require advice on how to go about assessing the legal risks posed, please do not hesitate to get in touch with your usual Burges Salmon contact.
The guidance, ‘Responsible AI in Recruitment’, is available at: https://www.gov.uk/government/publications/responsible-ai-in-recruitment-guide