On 31 October 2023, the Law Society set out artificial intelligence (AI) recommendations which it urged the government to consider ahead of the AI Safety Summit.
The Law Society emphasised that, whilst it acknowledged the potential of AI, it was concerned about the challenges that AI presented to the legal and justice sectors. In this regard, it urged the government to adopt a nuanced, balanced approach to the development and application of AI in those sectors. This follows the approach taken in its response to the UK Government White Paper on AI Regulation, which we discuss further here.
The Law Society's key recommendations are summarised as follows:
- Blended Approach to Regulation. The UK government should introduce a blend of adaptable, principles-based regulation and firm legislation to safeguard societal interests without impeding technological progress. Regulators and the UK workforce should be directed towards benefitting from the opportunities AI presents.
- Focus of Legislation. Legislation should focus on and clearly define ‘high-risk contexts’, ‘dangerous capabilities’ and ‘meaningful human intervention’ in AI. Any incoming legislation should emulate the forthcoming EU AI Act by establishing parameters for unacceptable AI use and indicating where it is inappropriate for AI to be central to decision-making.
- Expertise of Legal Profession. This should be recognised and harnessed in any regulatory approach to AI. Legal professional privilege must be protected in the future regulation of AI.
- Drivers of Economic Growth. The Law Society recommended that this could be achieved by providing clarity on procurement practices, supporting the role of insurers, setting out a clear position on intellectual property and providing targeted support for SMEs.
- Boosting Public Trust. This could be achieved through mandatory transparency for the use of AI in government and public services, an enhanced disclosure and due diligence system, prioritising accessibility, ensuring competence and emphasising evidence-based performance metrics.
- Enhancing Accountability. This could be driven through regulator-guided appeal mechanisms. The Law Society urged the government to require that an AI officer be appointed in legal entities of a certain size or those operating in high-risk areas.
- Confidentiality and Sensitive Information. This should be prioritised and protected in the future regulation of AI and in the use of AI systems.
It also recommended alignment on AI across sectors and internationally, which mirrors the main focus of the AI Safety Summit: the aim of achieving a consistent international approach to safe and transparent AI development. We discuss the Summit further here.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker or any other member of our Technology team.
This article was written by Victoria McCarron.
"The legal profession plays an integral role in shaping the future of AI regulation. We recognise AI’s potential to transform lives, boost the economy and increase access to justice. However, our members need further clarity on legislation, procurement practices and how discrepancies across sectors will be mitigated to enable the profession to make the most of these technologies.”