On 24 April 2024, the Office of Qualifications and Examinations Regulation (Ofqual) published its regulatory approach to artificial intelligence (AI). 

This approach was published in response to a letter sent in February 2024 by the Secretary of State for Education and the Secretary of State for Science, Innovation and Technology, which asked key regulators to outline, by 30 April 2024, their strategic approach to AI and the steps they are taking in line with the expectations set out in the White Paper.

In line with this, Ofqual has established five key objectives around which it intends to structure its AI regulatory work:

  1. Ensuring fairness for students;
  2. Maintaining validity of qualifications;
  3. Protecting security;
  4. Maintaining public confidence; and
  5. Enabling innovation.

We summarise the key points of Ofqual’s approach below.

Precautionary Principle

Ofqual has adopted a ‘precautionary principle’ in its approach to the use of AI. This is intended to ensure that AI is applied and developed in a way that does not threaten the fairness or standards of qualifications, whilst remaining open to compliant AI innovation. Accordingly, this approach is broadly intended to align with the White Paper’s pro-innovation approach to AI. For more information on the content of the White Paper, please see our article here.

Co-Regulation

In line with many other sectoral regulators, Ofqual has embraced a collaborative approach to AI, emphasising that it will work with awarding organisations to understand and control potential harms. Ofqual particularly highlights its engagement with its counterparts in Wales (Qualifications Wales) and Northern Ireland (CCEA Regulation), as well as its engagement internationally through the Alan Turing Institute’s ‘AI Standards Forum for UK Regulators’.

In late 2023, Ofqual introduced an innovation service aimed at supporting innovation by awarding organisations. The service considers how proposed innovations interact with regulatory requirements and identifies emerging regulatory risks. It remains open to the use of AI where it promotes valid and efficient assessment.

Managing Malpractice Risks

As part of its annual Statement of Compliance, Ofqual has required awarding organisations to assess, address, and provide detailed information on how they are managing AI-related malpractice in their assessments.

Ofqual has acknowledged that non-exam assessments (for example, coursework) are more susceptible to technology misuse. Nonetheless, initial reporting suggests that “only modest numbers” of malpractice cases have been identified as requiring investigation or sanctions. Accordingly, Ofqual has said that, whilst it is taking short-term actions to ensure secure and safe delivery, it is also considering longer-term interventions.

Where qualification content is set nationally by other bodies, Ofqual has highlighted that it is for those bodies to determine how AI should form part of their assessments. Such bodies include the Department for Education, the Institute for Apprenticeships and Technical Education, and other authorities where qualifications serve as a ‘licence to practise’.

Use of AI for Invigilation and Marking

Additionally, Ofqual reiterated guidance issued to awarding organisations in 2023, in which it stated that AI cannot be used as the sole remote invigilator for assessments.

This aligns with the position that AI alone does not satisfy the requirement for human judgement, mirroring the position set out in Article 22 of the UK GDPR, which restricts decisions based solely on automated processing that produce legal or similarly significant effects on individuals.

This position reflects Ofqual’s precautionary principle. It remains open to review in light of further research and evidence.

Next Steps and Implications

The key takeaways from this response indicate that Ofqual’s priorities moving forward are to ensure that AI use in the education sector remains safe, and that qualifications are not undermined by improper use of AI. Ofqual intends to develop this as a collective understanding amongst awarding organisations.

Ofqual is now seeking assurances from awarding organisations as to how they will mitigate negative impacts resulting from the inappropriate use of AI. It also emphasises that it remains committed to evaluating evidence on how awarding organisations handle AI-related malpractice, in order to inform future guidance.

It is important to note that, at present, examples of AI deployment, whether as a tool for purposes such as marking efficiency or through inappropriate use in examination procedures, remain limited. Ofqual recognises this and acknowledges that its guidance and approach may change in future as AI use develops.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Lucy Pegler, Tom Whittaker or Liz Smith. For the latest updates on AI law, regulation, and governance, see our AI blog at: Burges Salmon blog (burges-salmon.com)

This article was written by Liz Smith and Victoria McCarron.