In early 2024 the First-tier Tribunal considered, amongst other things, whether evidence generated by ChatGPT, suggesting keywords that could have been used to search for documents, could be relied upon to argue that the keywords actually used were too narrow.

The key points about whether the ChatGPT evidence could be relied upon were:

  • context - the Appellant made a FOIA request to the Department for Work and Pensions (DWP). DWP said it held some of the information but did not provide it and, after an internal review, upheld that decision. The Appellant complained to the ICO, which decided that DWP did not hold any additional information within the scope of the request, noting the terms and scope of the searches DWP applied (the ICO's decision). The judgment concerned an appeal of the ICO's decision.
  • searches - amongst other things, the Tribunal considered evidence the Appellant introduced on appeal based on an extract of a legal text and evidence from ChatGPT. The Appellant explained the methodology as follows: "the Appellant copied the text [of the Request] into ChatGPT and asked for it to write a list of the top 10 keywords for the FOI request if someone intended to search a database to meet that request". The Appellant asserted that the terms produced by the ‘independent AI system’ identified searches which were very different from those DWP used, and relied on ChatGPT's outputs to allege that DWP's searches in response to the FOIA request were too narrow because they focussed on one issue only (filming) and not on another issue in the request (evidence collection).
  • judgment - the Tribunal said in respect of the ChatGPT evidence:

Firstly, we must assess the weight that we give to the ChatGPT evidence. We place little weight upon that evidence because there is no evidence before us as to the sources the AI tool considers when finalising its response nor is the methodology used by the AI tool explained. If comparisons are drawn to expert evidence, an expert would be required to explain their expertise, the sources that they rely upon and the methodology that they applied before weight was given to such expert evidence. In the circumstances we give little weight to the ChatGPT evidence that searches should have been conducted in the form set out within that evidence.

  • the Tribunal also concluded that DWP's enquiries of staff, sources searched, and search terms used were, on the whole, reasonable, proper and effective to identify information within the scope of the FOIA request.

Parties need to consider carefully (amongst other things) their legal obligations to search and, consequently, how they search for potentially relevant documents. Sometimes, evidence is required to demonstrate that a search strategy and methodology was appropriate, or to make the case that it was not. This case is a useful illustration of how courts approach the use of AI systems with caution, while leaving open the possibility that, in the right circumstances, AI systems could provide evidence of some weight.

The case is Oakley v Information Commissioner [2024] UKFTT 315 (GRC) (18 April 2024).

For more information about the law, technology and practice of disclosure, contact Tom Whittaker or David Hine.