The Equality and Human Rights Commission (EHRC) has responded to the UK Government's update on its White Paper on AI regulation. Whilst remaining broadly supportive of the principles-based approach set out in the White Paper, the EHRC maintained that these principles lack sufficient emphasis on equality and human rights. Here we summarise the key points.
EHRC's position in 2023
First, a recap of EHRC's response in June 2023 to the UK government's original White Paper.
Support for Principles-Based Approach
In its June 2023 response, the EHRC stated its ambition to act as an effective regulator of AI, supporting the White Paper’s approach of promoting AI innovation whilst ensuring safe and responsible use of the technology.
The response agreed with the White Paper’s position that AI should remain undefined, given the pace of technological change in this area, and instead be identified according to its characteristics.
Additionally, the response recognised the White Paper’s five key principles of Safety, Security and Robustness, Transparency and Explainability, Fairness, Accountability and Governance, and Contestability and Redress, expressing support for this principles-based framework for regulating AI. In particular, the EHRC pinpointed Fairness as the key principle that underpinned their work on equality and human rights.
Human Rights: Risks posed by AI
Whilst the EHRC approved of the White Paper’s direct acknowledgement of the link between Fairness and human rights, it criticised the paper for failing to link any of the other principles to human rights.
Accordingly, the EHRC said that the White Paper did not appropriately cover the full range of human rights within the UK. In particular, it highlighted the right to privacy: the White Paper had not sufficiently acknowledged that privacy is a broader concept than data security or data protection, as AI raises new risks such as the use of Facial Recognition Technology for constant surveillance.
Furthermore, the EHRC said that the White Paper had failed to adequately consider how to ensure that all individuals had routes to redress for AI-related harms. In its response, the EHRC discussed potential obstacles facing individuals who might need to challenge discrimination in AI, such as access to legal aid.
The response also noted that the White Paper made a limited reference to equality, with no reference to regulators’ own Public Sector Equality Duty obligations, despite the widely acknowledged risk of discrimination from AI systems.
Therefore, despite supporting the general aims of the White Paper, the EHRC expressed concerns that it had not adequately covered the potential risks and issues posed by AI that were already foreseeable and would only become more apparent over time.
Implementation and Future Approach
The response suggested reworking the proposals set out in the White Paper so that they reflected the OECD AI Principles and the UNESCO Recommendation on the Ethics of AI, both of which place more emphasis on human rights. The EHRC also highlighted that organisations should apply the higher threshold of ‘meaningful’ transparency established by the OECD principles when setting out the usage and purpose of AI.
The response generally welcomed the White Paper’s commitment to a non-statutory implementation of its key principles, although the EHRC remained open to a statutory approach if necessary. It noted that if a statutory duty were eventually placed on regulators to have due regard to the principles, consideration would need to be given to existing frameworks and duties, such as those set out in the Human Rights Act 1998 and the Equality Act 2010.
2024 Update
On 30 April 2024, the EHRC released an update to this response, outlining that it would take a principles-based approach rather than examining the specific risks of AI across all sectors. Accordingly, the EHRC would focus on the issues that it believed posed the greatest risks to equality and human rights.
The EHRC emphasised that it would continue its collaboration with other regulators, such as the ICO, and with the Digital Regulation Cooperation Forum (DRCF), particularly in relation to the Public Sector Equality Duty.
Moving forward, the EHRC indicated that it would prioritise specific work programmes, including: engaging with police forces and oversight bodies to improve compliance in their use of Facial Recognition Technology (FRT); exploring potential discrimination arising from online recruitment through social media platforms; supporting litigation to challenge the potentially discriminatory use of FRT in the workplace; and publishing relevant guidance.
EHRC: Its Vision of its Role in AI Regulation
The EHRC urged the government to recognise and fund the enhanced role of the Commission, given the urgent need to respond to the human rights and equality implications of AI; this could help regulators across all sectors meet their own Public Sector Equality Duty and Human Rights Act obligations. It also emphasised the benefits of the specialised knowledge offered by individual regulators: given more funding and governmental support, each could address the challenges posed by AI and be better placed to release guidance within the twelve-month timeline suggested by the government.
The response also emphasised the need for effective collaboration between regulators, highlighting the memorandum of understanding between the EHRC and the ICO as a positive step forward, as well as the newly created Digital Regulation Cooperation Forum (DRCF).
Accordingly, regulators such as the EHRC clearly see established roles for themselves in AI regulation: they are willing to engage with the regulatory framework as currently proposed, whilst also drawing attention to, and suggesting solutions for, the issues specific to their remits.
If you have any questions or would otherwise like to discuss any issues raised in this article, please contact Adrian Martin, Ellen Goodland, Tom Whittaker, David Varney, or any member in our Technology team.
This article was drafted by Victoria McCarron and Ellen Goodland.