The Council of Europe's European Commission for the Efficiency of Justice (CEPEJ) Working Group on Cyberjustice and Artificial Intelligence issued a note offering some preliminary thoughts on what judges and other public sector justice professionals can expect from the use of generative AI tools in a judicial context.
Here we summarise the key points, which form a useful list of key risks and do's and don'ts, and which are interesting to compare and contrast with the guidance on AI published in England & Wales (here).
What are the risks?
The report identifies:
- Potential production of factually inaccurate information (false answers, “hallucinations” and bias)
- Possible disclosure of sensitive data and risks to confidentiality
- Lack of references for the information provided and potential violation of intellectual property and copyright
- Limited capability to provide the same answer to an identical question
- Potential replication of outputs
- Varying stability and reliability of Generative AI models for critical and time-sensitive processes
- Exaggeration of cognitive biases
How should it be applied?
The report identifies the following:
- Make sure that the tool’s use is authorised and appropriate for the desired purpose.
- Bear in mind that it is only a tool and try to understand how it works (be aware of human cognitive biases).
- Give preference to systems that have been trained on certified and official data, the list of which is known, to limit the risks of bias, hallucination, and copyright infringement.
- Give the tool clear instructions (prompts) about what is expected of it.
- Enter only non-sensitive data and information which is already available in the public domain.
- Always check the correctness of the answers, even where references are given (in particular, check that the references actually exist).
- Be transparent and always indicate if an analysis or content was generated by generative AI.
- Reformulate the generated text if it is to feed into official and/or legal documents.
- Remain in control of your choices and the decision-making process, and take a critical look at the proposals made.
When should generative AI not be applied?
The report identifies the following:
- If you are not aware of, do not understand, or do not agree to the terms and conditions of use.
- If its use is forbidden or against your organisation's regulations.
- If you cannot assess the result for factual correctness and bias.
- If you would be required to enter, and thus disclose, personal, confidential, copyright-protected or otherwise sensitive data.
- If you must know how your answer was derived.
- If you are expected to produce a genuinely self-derived answer.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.