In April 2024 the Office for Nuclear Regulation (ONR) published a policy paper outlining its pro-innovation approach to regulating artificial intelligence (AI) in the nuclear sector.

The paper was published in response to an open letter from the Secretary of State for Science, Innovation and Technology and the Secretary of State for Work and Pensions, which asked the ONR and other regulators to publish updates detailing how their regulatory approaches align with the UK Government’s 2023 AI Regulation White Paper. Please see our previous post for more information on the UK’s approach to regulating AI as set out in the White Paper.

In the UK, regulators are required to govern the development and use of AI within their specific sectors; there is no dedicated AI regulator or overarching AI legislation. This differs from the EU’s approach, where the EU AI Act forms the legal framework for regulating AI at a cross-sector level.

In its paper, the ONR recognises that AI has the potential to improve the safety of current and future nuclear plants and facilities and welcomes the growing appetite of dutyholders to apply AI systems and technologies in ways that could improve safety and reduce hazards. 

The ONR also acknowledges its role in supporting and advancing innovation across the nuclear sector and the importance of actively pursuing a collaborative and open relationship with key stakeholders to enable the safe deployment of new technologies, including AI. 

Alignment of the ONR’s Approach with the UK Government’s Principles

The UK Government’s White Paper sets out five key principles that form a framework for regulators to interpret and apply to AI within their sectors.

The ONR considers that its current programme of work is already well aligned with these principles. The table below summarises the existing measures currently adopted by the ONR and how they align with the five principles.

White Paper principles | ONR’s comments
Safety, security and robustness

A core function of the ONR as a regulator is to protect the public. As a result, alignment with this principle is crucial and the ONR must ensure that dutyholders meet and adhere to the applicable safety and security standards.

The ONR’s regulatory framework is currently “outcome focused and technology-neutral and responsibility lies with the dutyholder to transparently explain how any regulated system operates in a safe and secure way”. Dutyholders are required to meet the requisite safety and security standards, irrespective of the type of technology used.

To ensure these standards are met, the ONR’s assessments, inspections and permissioning regime require dutyholders to continuously identify, assess and manage safety and security risks. The ONR is entitled to take enforcement action where these safety and security standards are not met.

The ONR engages with a variety of internal and external stakeholders to promote the safe use of AI in the nuclear sector. For example, the ONR has partnered with the Environment Agency to pilot a groundbreaking sandboxing exercise to explore the use of AI in the nuclear sector.

Appropriate transparency and explainability

The ONR states that “the level of transparency and explainability required for any system or technology is proportionate to the significance of the safety or security claim made against it”. As a result, it is paramount that dutyholders understand, and are able to explain, the potential impact on safety and security of any failure of the AI systems they deploy.

The ONR anticipates that the concepts dutyholders already use to ensure nuclear safety for non-AI technologies will also be applied to the deployment of AI systems.

Fairness

The ONR does not anticipate that the use of AI systems in delivering nuclear safety or security functions within the civil nuclear sector poses a risk of breaching this principle. However, the ONR notes that this position will be kept under review and reassessed in the future if required.

Accountability and governance

The ONR has clear regulatory expectations and requires dutyholders to adopt proportionate accountability and governance arrangements to ensure AI systems remain safe and secure throughout their life cycle. 

By conducting regular inspections and through its permissioning regime, the ONR is able to assess whether the accountability and governance arrangements of dutyholders and the supply chain are compliant. The ONR is entitled to take enforcement action where arrangements are found to be non-compliant.

Contestability and redress

The ONR adopts a range of methods to promote contestability and redress in relation to the application of AI within the nuclear sector. 

Examples include: 

  • Encouraging industry professionals and third parties to raise concerns. 
  • Operating a whistleblowing process in compliance with UK whistleblowing legislation.
  • Operating a process for complaints relating to the ONR or its services in compliance with the Regulator’s Code. 
  • Allowing victims to apply for a review of an ONR decision not to prosecute via the ONR’s Victims’ Right to Review process.

Moving Forward

Whilst the ONR considers that its current programme of work is already well aligned with the applicable AI principles set out in the White Paper, it recognises the importance of remaining adaptable and responsive to the fast-changing landscape and the needs of its stakeholders.

The ONR aims to continue its programme of targeted stakeholder engagement to support the safe deployment of AI systems, whilst also improving consistency of approach and reducing regulatory uncertainty. 

As part of its pro-innovation strategy, the ONR intends to adopt a more targeted and effective approach by:

  • Working with dutyholders to update existing guidance on Safety Assessment Principles (SAPs) and Security Assessment Principles (SyAPs) to address emerging and maturing uses of AI. 
  • Continuing to engage with industry, academia and regulatory organisations to increase consistency and minimise regulatory uncertainty.
  • Increasing the number of sandbox exercises to test AI technologies in a safe environment.
  • Building its internal capabilities in relation to AI by developing its AI-focused team of specialist safety and security inspectors.

The ONR is expected shortly to publish its 2024 Corporate Plan, which will outline how it intends to meet its AI regulatory objectives.

Please contact a member of our nuclear team for further information on the regulation of AI in the nuclear sector. If you would like to speak with our technology team, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney or Martin Cook.

This article was written by Ian Truman, Laura Callard and Jacob Hall.