The Law Society – the independent professional body for solicitors in England & Wales – has responded to the recent UK White Paper: “A pro-innovation approach to AI regulation” (see our article on the White Paper and our flowchart for navigating it).

The Law Society's response must be seen in the context of its role.  Its 'overarching purpose is to safeguard the rule of law in the best interests of the public and the client[, ...] driven by [...] core objectives to promote access to justice, safeguard the rule of law, promote diversity and inclusion, the international practice of law and to support our members' businesses.'  The response focuses on two contexts - justice and legal services - but its points may apply by analogy to other sectors and contexts.

In summary, the Law Society:

  • supports the pragmatism of allowing each regulator to adopt an approach suited to its expertise and domain. However, clarity is needed on how discrepancies across sectors and regulators will be mitigated, and on how the legal profession can extend services overseas in the face of differing AI legislation across jurisdictions.
  • emphasises that "confidentiality is a core value within the legal profession, with sensitive information protected by legal professional privilege. It is crucial that this is protected in the future regulation of AI and in the use of AI systems."
  • considers that "the assignment of liability in scenarios where an AI-driven LawTech product causes harm remains an area of contention and has important access to justice implications. There is an urgent need for explicit regulations delineating liability across the AI lifecycle, and for guidelines which clearly describe under which circumstances an entity may be held liable for the outcomes of an AI system."

The Law Society recommends:

  1. Taking a blended approach to regulating AI, including:
    • that the Government introduce legislation focusing on inherently high-risk contexts and dangerous capabilities. High-risk examples include: where an AI system's implementation has immediate effects without human review; where the application of the AI is a regulated activity; where the AI significantly influences human rights; or where it presents substantial potential or actual harm. The vulnerability of the affected individuals should also be considered. This would establish parameters for where the use of AI is unacceptable or where it is inappropriate for AI to make zero-sum decisions.
    • enhancing accountability through regulator-guided appeal mechanisms.
    • establishing the role of an AI officer within legal entities of a certain size, operating in high-risk areas, or developing AI systems with dangerous capabilities. The AI officer would need an understanding of legal and ethical frameworks, technical standards and security controls, and data protection frameworks, and would need to engage with the board.
  2. Recognising and harnessing the expertise of the legal profession in the AI regulatory approach.  This would include engaging in expert groups and improving legal and ethical education.
  3. Driving economic growth through greater clarity on AI procurement; improving the insurance market for AI systems; clarifying the position on IP and AI; and delivering targeted support to small and medium enterprises.
  4. Boosting public trust, including by:
    • creating mandatory transparency for the use of AI in government or public services, so that every citizen and consumer is aware of when and how decisions affecting them are made or informed by AI systems, and, where harms have a substantial effect, mandating an alternative human review.
    • exploring an enhanced disclosure and due diligence system - a comprehensive regime requiring organisations to proactively identify, track, and manage AI-related risks.
    • including interpretability alongside transparency and explainability.  The Law Society states that when AI systems generate erroneous predictions, it should be possible to understand not only why a particular decision was incorrect, but also where the error within the decision-making mechanism or underlying connections is located.
    • including accessibility alongside fairness.  The principle of fairness, although defined under data protection law, does not currently include an accessibility-by-design requirement for disabled users of products and services. The Law Society advocates for the principle of fairness to include accessibility, potentially by adopting the FCA’s accessibility requirement.
    • ensuring ongoing competence and capability for holding AI accountable. The individual overseeing AI should possess adequate means, skills, credentials, resources, and domain-specific knowledge to scrutinise AI in the relevant context. The person should also have the ability, authority, and trust to override, question, or interrogate an AI decision-making mechanism or output.  However, there will remain a need for collective responsibility over strategic and ethical decisions, in addition to the expert advice an AI officer can provide.
  5. Having a proportionate and world-leading framework by reducing divergence, duplication, and fragmentation across four areas:
    • clarifying roles and responsibilities to align human practice.
    • aligning human values and AI objectives.  Clear objectives for AI systems should be defined and continuously reassessed for alignment with human values.
    • cross-sector alignment.  The Law Society encourages alignment of the White Paper principles with existing data protection norms in the UK and other jurisdictions, echoing the ICO's efforts.
    • international alignment. The Law Society considers that the definition of AI should harmonise with international benchmarks, such as the OECD definition or that stated in the National Security and Investment Act, to ensure global understanding and interoperability.
  6. Building UK workforce and regulator capability to take advantage of AI opportunities.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, or any other member in our Technology team.