There is relatively little publicly available insight into when and where the courts are seeing AI being used in court proceedings. However, in a recent judgment, a judge referred in passing to an instance of an AI bot trying to join a remote hearing.
The judgment in Perseus Ventures Limited v Foskett & Ors [2024] EWHC 3006 (KB) concerned whether the hearing of the claimant's application for disclosure should be conducted remotely. The claimant was based in Hong Kong. The court ultimately ruled against allowing a remote hearing, emphasising the importance of providing adequate reasons for such requests.
The court considered a number of non-exhaustive factors when reaching its decision. One of those factors was the court’s ability to safeguard the integrity of the hearing. In that context, the judge referred to an apparently separate case in which an AI bot tried to join a remote hearing:
33. The vexed issue of the recording of court hearings may be complicated by hearings being remote. I have recent experience of a remote hearing where one of the "people" (and I use that word in inverted commas) who purported to join the hearing was in fact an AI Bot which will record and transcribe that which it "hears" over the connection. (It would appear that HMCTS software in fact stopped the Bot from joining the hearing.) For obvious reasons, it is harder to know whether somebody is recording a hearing when that hearing takes place remotely, and it is harder to police what happens if there is non-compliance.
It appears from the judge's brief comments that the presence of AI in litigation is to be treated with caution but not outright refusal. This is consistent with the cases in England & Wales which confirm that: 1) the court could not accept AI-generated evidence about potential search terms without evidence from the party seeking to rely on it as to what weight to give the AI-generated content (see here); and 2) the court was concerned about apparently fictitious AI-generated case law (the Harber case, here).
There was no suggestion in the case that the person who applied to attend remotely might jeopardise the integrity of the hearing. Instead, the judge raised the above experience as a theoretical possibility relevant to the issue of hearing integrity. The courts are aware of the need to ‘be aware that court/tribunal users may have used AI tools’. That is one of the messages in HMCTS guidance to judicial office holders on the use of AI in court proceedings (see here). That guidance did not recognise an AI bot attending a hearing as one of the (non-exhaustive) examples of potential uses of generative AI in courts and tribunals, potentially reflecting the difficulty of predicting every scenario in which AI could be used in court proceedings.
As the courts develop their experience of AI being used in court proceedings, we can expect further judicial commentary and judgments on the topic in 2025.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.
This article was written by David Harrison.