Researchers from the University of Oxford, University of Cambridge, University of Copenhagen and National University of Singapore have published an ethical framework for the use of AI in academic research and writing in the journal Nature Machine Intelligence (here).

The framework was proposed in response to concerns about plagiarism, authorship attribution, the originality of work and academic integrity where AI is used. The ethical guidelines aim to help maintain the standards and credibility of academic writing.

The research sets out three overarching criteria for the responsible use of AI, specifically Large Language Models (LLMs), in academic scholarship:

1. Human vetting and guaranteeing

The guidelines propose that at least one author should be able to guarantee, and take responsibility for, the accuracy of the academic writing, including each substantive claim and any supporting evidence. This is important given the risks of bias, logical incoherence and inaccuracy in material produced by LLMs. An author must actively fact-check and review any claims, arguments or evidence provided by an LLM.

2. Substantial human contribution 

An author of academic writing must have substantially contributed to the material. This criterion is based on the International Committee of Medical Journal Editors standards: the substantial contribution must be to i) the conception or design, ii) the acquisition, analysis or interpretation of the data or source materials, or iii) the design of the prompts or the fine-tuning process.

3. Acknowledgement and transparency 

The use of AI to assist in writing should be acknowledged. This helps others to understand, verify, replicate and judge the credibility of the work, and ensures that authorship credit and responsibility are assigned appropriately.

Specifically on transparency, the framework suggests the use of an LLM declaration. That declaration should be:

'sufficient to realize the essential purposes of transparency — enabling independent evaluation of findings and ensuring appropriate attribution of credit — while remaining practical and efficient. To those ends, we suggest that authors submit a short, standardized generative AI use declaration with their submissions (adapted appropriately for each field of research, as well as to each type of research or output within a given field), as we have done for this article.

This standardized statement serves to testify to authors’ adherence to the three essential criteria for ethical use of LLMs in academic writing described in this manuscript. It should be included in the manuscript at submission, and adapted as relevant to each field of research, as well as to each type of research or output within a given field.'

The template declaration:

“Any use of generative AI in this manuscript adheres to ethical guidelines for use and acknowledgement of generative AI in academic research. Each author has made a substantial contribution to the work, which has been thoroughly vetted for accuracy, and assumes responsibility for the integrity of their contributions.”

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.

With thanks to Molly Taylor for drafting.