On 24 April 2024, Ofsted published its strategic approach to AI. This responded to the request by the Secretary of State for Science, Innovation and Technology that key regulators outline, by 30 April 2024, their approach to AI and the steps they are taking in line with the White Paper. Ofsted's response outlines how it will ‘use AI responsibly and fairly’. Ofsted also says it is supportive of the use of AI by providers where it improves the care and education of children and learners, and sets out its ‘position on the use of AI by education and social care providers’. We summarise the key points below.
What is AI?
Ofsted defines the term broadly, referring to:
‘artificial intelligence’ as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking alone.
How Ofsted will use AI
Ofsted refers to having multiple use cases for AI, including:
- making use of the information it holds; and
- exploring potential for AI to ‘further improve the way we inspect and regulate’.
This will be to help Ofsted work according to the values in its strategy: ‘to put children and learners first while remaining independent, accountable, transparent and evidence-led.’
Specifically, Ofsted says it ‘will follow the 5 principles’ in the UK Government's AI regulation white paper. In Ofsted's words, this includes (but does not appear limited to):
| Regulatory principle | Ofsted will… |
|---|---|
| Safety, security and robustness | Make sure AI solutions are secure and safe for users and that they protect users’ data; continually test our AI solutions to identify and rectify bias and error |
| Appropriate transparency and explainability | Be transparent about our use of AI, and make sure we test solutions sufficiently to understand the decisions it makes |
| Fairness | Only use AI solutions that are ethically appropriate – in particular, we will fully consider any bias relating to small groups and protected characteristics at the development stage and then monitor it closely and correct it where appropriate |
| Accountability and governance | Provide clear guidance and rules for developers and users of AI within Ofsted about their responsibilities |
| Contestability and redress | Make sure that staff are empowered to correct and overrule AI suggestions – decisions will be made by the user, not the technology; continue to manage concerns and complaints through our existing complaints procedure |
Providers' use of AI
Ofsted states that it will consider a provider's use of AI by the effect it has on the criteria set out in its existing frameworks, including its inspection framework, registration guidance, and enforcement policies.
However, Ofsted will not ‘directly inspect the quality of AI tools’. Instead:
Leaders, therefore, are responsible for ensuring that the use of AI does not have a detrimental effect on those outcomes, the quality of their provision or decisions they take.
Ofsted sets out what it expects from providers in relation to their use of AI:
| Regulatory principle | Providers are expected to… |
|---|---|
| Safety, security and robustness | Assure themselves that AI solutions are secure and safe for users and protect users’ data; ensure they can identify and rectify bias or error |
| Appropriate transparency and explainability | Be transparent about their use of AI, and make sure they understand the suggestions it makes |
| Fairness | Only use AI solutions that are ethically appropriate – in particular, we expect providers to consider bias relating to small groups and protected characteristics before using AI, monitor bias closely and correct problems where appropriate |
| Accountability and governance | Ensure that providers and their staff have clear roles and responsibilities in relation to the monitoring, evaluation, maintenance and use of AI |
| Contestability and redress | Make sure that staff are empowered to correct and overrule AI suggestions – decisions should be made by the user of AI, not the technology; allow and respond appropriately to concerns and complaints where AI may have caused error resulting in adverse consequences or unfair treatment |
Next steps
Ofsted states that it will now:
- continue building their understanding - including on the use of AI by providers and emerging research on its impact on the outcomes and experiences of children and learners;
- develop inspectors' knowledge about AI, so they can consider AI's different uses; and
- work with other regulators and central government departments.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Lucy Pegler, Tom Whittaker or Liz Smith. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com)
This article was drafted by Liz Smith and Victoria McCarron