The Bar Council has issued new guidance* for barristers navigating the growing use of ChatGPT and other generative artificial intelligence (AI) systems based on large language models (LLMs).
It concludes that “there is nothing inherently improper about using reliable AI tools for augmenting legal services, but they must be properly understood by the individual practitioner and used responsibly.”
LLMs have not been around long enough, and have not been sufficiently tested, for it to be clear which tasks they can or should be used for in legal practice. Some practitioners and judges have commented positively on using them to arrange text. However, it is important for barristers who choose to use LLMs to do so responsibly, weighing the potential risks and challenges of such use in the light of their professional responsibilities.
The document identifies key risks with LLMs:
- anthropomorphism - LLMs may give the user the impression that they are engaging with a human, but LLMs (currently) ‘do not have human characteristics in any relevant sense’;
- hallucinations - where the output sounds plausible but is either factually incorrect or unrelated to the context;
- information disorder - where the LLM produces misinformation (for example, see A cautionary tale of using AI in law; UK case finds that AI generated fake case law citations);
- bias in data training - ‘The fact that the training data is trawled from the internet means that LLMs will inevitably contain biases or perpetuate stereotypes or world views that are found in the training data’; and
- mistakes and confidential data training - 'anything that a user types into the system is used to train the software and might find itself repeated verbatim in future results. This is plainly problematic not just if the material typed into the system is incorrect, but also if it is confidential or subject to legal professional privilege'.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Tom Whittaker, Brian Wong, David Varney, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).
* the document includes an important notice at the end that it does not constitute ‘guidance’ under the Bar Standards Board (BSB) Handbook, that neither the BSB nor the Legal Ombudsman is bound by it, and that it does not constitute legal advice.
The growth of AI tools in the legal sector is inevitable and, as the guidance explains, the best-placed barristers will be those who make the effort to understand these systems so that they can be used with control and integrity. Any use of AI must be done carefully to safeguard client confidentiality and to maintain trust and confidence, privacy, and compliance with applicable laws - Sam Townend KC, Chair of the Bar Council