The Council of Europe's European Commission for the Efficiency of Justice (CEPEJ) Working Group on Cyberjustice and Artificial Intelligence has issued a note offering some preliminary thoughts on what judges and other public sector justice professionals can expect from the use of generative AI tools in a judicial context.

Here we summarise the key points, which provide a useful list of key risks and do's and don'ts, and which are interesting to compare and contrast with the guidance on AI published in England & Wales (here).

What are the risks?

The report identifies:

  1. Potential production of factually inaccurate information (false answers, “hallucinations” and bias)
  2. Possible disclosure of sensitive data and risks to confidentiality
  3. Lack of references for the information provided and potential violation of intellectual property and copyright
  4. Limited ability to provide the same answer to an identical question
  5. Potential replication of outputs
  6. Varying stability and reliability of Generative AI models for critical and time-sensitive processes
  7. Exaggeration of human cognitive biases

How should it be applied?

The report identifies the following:

  1. Make sure that the tool’s use is authorised and appropriate for the desired purpose. 
  2. Bear in mind that it is only a tool and try to understand how it works (be aware of human cognitive biases). 
  3. Give preference to systems trained on certified and official data whose sources are known, to limit the risks of bias, hallucination and copyright infringement.
  4. Give the tool clear instructions (prompts) about what is expected of it. 
  5. Enter only non-sensitive data and information which is already available in the public domain. 
  6. Always check the correctness of the answers, even where references are given (in particular, check that the references actually exist).
  7. Be transparent and always indicate if an analysis or content was generated by generative AI. 
  8. Reformulate the generated text if it is to feed into official and/or legal documents. 
  9. Remain in control of your choices and the decision-making process, and take a critical look at the proposals made.

When should generative AI not be applied? 

The report identifies the following:

  1. If you are not aware of, do not understand, or do not agree to the terms and conditions of use.
  2. If its use is forbidden by, or against, your organisation's regulations.
  3. If you cannot assess the result for factual correctness and bias.
  4. If you would be required to enter, and thus disclose, personal, confidential, copyright-protected or otherwise sensitive data.
  5. If you must know how your answer was derived.
  6. If you are expected to produce a genuinely self-derived answer.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.