Ofgem is seeking views on its draft guidance on how artificial intelligence (AI) should be used safely, securely, fairly and sustainably in Great Britain’s (England, Scotland and Wales) energy sector, with the aim of encouraging innovation.
Ofgem explains that:
This guidance builds on our call for input on the use of AI in the energy sector, and our high-level strategic approach to AI, both published in April 2024. It sets out good practice for all stakeholders considering procurement or deployment of AI in the energy sector. Our guidance aims to streamline regulatory burdens on stakeholders, by acting as a roadmap to the existing regulatory framework.
See our summary of Ofgem's strategic approach to AI here.
The key points are:
- Purpose - The purpose of this draft guidance is to encourage an ethical approach to AI adoption in the energy sector. “This guidance sets out good practice for stakeholders to consider when evaluating opportunities to engage with AI. These good practice expectations are intended to supplement and support the existing regulatory regime applying to the energy sector.” Ofgem's current view is that the existing regulatory framework is adequate to govern the use of AI, although it will keep this under review as AI usage in the sector develops.
- Approach - Ofgem's approach to AI consists of four principles: safety, security, fairness and sustainability. The overall approach is stated to be pro-innovation, in line with Ofgem's Growth Duty.
- Audience - the guidance is aimed at “all stakeholders involved with AI in the sector which includes, but is not limited to, licensees, market participants, operators of essential services, dutyholders, technology companies, AI developers, consumer groups, other regulators and government.”
The guidance sets out good practices, including the following, as well as some sector-specific examples:
Governance and policies
- Practice 1: clear strategy, with articulation of outcomes and associated risks
- Practice 2: effective accountability and governance
- Practice 3: clear guidelines and policies
- Practice 4: clear role expectations across the organisation
Risk
- Practice 1: have a clear strategy
- Practice 2: risk assessment and management
- Practice 3: adopt good practice in specification and development
- Practice 4: understand the characteristics of the AI component within the broader system
- Practice 5: identify and address potential failure modes that could impact safety, security or fairness
- Practice 6: develop confidence in the performance of the AI component within the broader system
- Practice 7: access to competent persons
- Practice 8: human and AI interaction
- Practice 9: monitor and review
Competencies
- Practice 1: robust training plans
- Practice 2: suitably qualified decision makers and staff
- Practice 3: knowledge management policies and procedures
- Practice 4: horizon scanning
Ofgem has also established an AI regulatory lab (Reg Lab) through which novel use cases can be submitted, regulatory risks discussed, and draft guidance produced. However, the “AI Reg Lab will not be able to provide feedback on regulatory compliance of submissions but will only consider the regulatory impacts”.
The guidance consultation remains open until 7 February 2025 here.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.