HM Courts and Tribunals Service in England and Wales has published guidance for judicial office holders on the use of artificial intelligence (AI).
The guidance applies to all judicial office holders, their clerks and other support staff. It sets out the key risks and issues associated with using AI, offers suggestions for minimising them, and emphasises that any use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice. It is therefore also useful for users of the court system.
The guidance acknowledges that AI in the law is not new: technology-assisted review, among other analytical methods, is now firmly part of electronic disclosure. Rather, it recognises that as the use of AI increases in society, so does its relevance to the court and tribunal system.
The guidance covers the following areas:
- Understand AI and its implications - in particular, the quality of answers depends on how the AI is engaged with. Even with the best prompts, AI may not provide answers drawn from authoritative databases or accurate information.
- Uphold confidentiality and privacy - private and confidential information should not be entered into a public AI service; anything that is typed should be treated as being published to all the world.
- Ensure accountability and accuracy - AI output should be checked before it is used or relied upon.
- Be aware of bias - AI output may reflect biases in the training data.
- Maintain security - follow best practices for device and account security.
- Take responsibility - judicial office holders are personally responsible for material which is produced in their name.
- Be aware that court/tribunal users may have used AI tools. The guidance notes:
- ‘all legal representatives are responsible for the material they put before the court/tribunal and have a professional obligation to ensure it is accurate and appropriate. Provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, but this is dependent upon context.’
- ‘Until the legal profession becomes familiar with these new technologies, however, it may be necessary at times to remind individual lawyers of their obligations and confirm that they have independently verified the accuracy of any research or case citations that have been generated with the assistance of an AI chatbot.'
The guidance also sets out indications that work may have been produced by AI:
- references to cases that do not sound familiar, or that have unfamiliar citations (sometimes from the US);
- parties citing different bodies of case law in relation to the same legal issues;
- submissions that do not accord with your general understanding of the law in the area;
- submissions that use American spelling or refer to overseas cases; and
- content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors.
The guidance is the first step in a proposed suite of future work to support the judiciary in its interactions with AI. The working group that prepared the guidance intends to consider publishing a supporting FAQ document.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member in our Technology team.