On 31 October 2023, the Law Society set out artificial intelligence (AI) recommendations which it urged the government to consider ahead of the AI Safety Summit.

The Law Society emphasised that whilst it acknowledged the potential of AI, it was concerned over the challenges to the legal and justice sectors that AI presented. In this regard, it urged the government to adopt a nuanced, balanced approach to the development and application of AI in the legal and justice sectors. This follows its approach taken in its response to the UK Government White Paper on AI Regulation, which we discuss further here.

The Law Society's key recommendations are summarised as follows:

  1. Blended Approach to Regulation. The UK government should introduce a blend of adaptable, principle-based regulation and firm legislation to safeguard societal interests while not impeding technological progression. Regulators and the UK workforce should be directed towards benefitting from AI opportunities.
  2. Focus of Legislation. Legislation should focus on and clearly define ‘high-risk contexts’, ‘dangerous capabilities’ and ‘meaningful human intervention’ in AI. Any incoming legislation should emulate the forthcoming EU AI Act by establishing parameters for unacceptable AI use and indicating where it is inappropriate for AI to be central in decision-making.
  3. Expertise of Legal Profession. This should be recognised and harnessed in any regulatory approach to AI. Legal professional privilege must be protected in the future regulation of AI. 
  4. Drivers of Economic Growth. It recommended that this could be achieved by providing clarity on procurement practices, supporting the role of insurers, providing a clear position on intellectual property and offering targeted support for SMEs. 
  5. Boosting Public Trust. This could be achieved by mandatory transparency for the use of AI in government or public services, an enhanced disclosure and due diligence system, prioritising accessibility, ensuring competence, and emphasising evidence-based performance metrics.
  6. Enhancing Accountability. This could be driven through regulator-guided appeal mechanisms. It urged the government to require that an AI officer be established in legal entities of a certain size or that operate in high-risk areas. 
  7. Confidentiality and Sensitive Information. This should be prioritised and protected in the future regulation of AI and in the use of AI systems. 

It also recommended alignment on AI across sectors and internationally, mirroring the main focus of the AI Safety Summit: the aim of achieving a consistent international approach to safe and transparent AI development. We discuss the Summit further here.

If you have any questions or would otherwise like to discuss any issues raised in this article, please contact David Varney, Tom Whittaker or any other member of our Technology team.

This article was written by Victoria McCarron.