The UK government has published an AI playbook. It updates and expands on the Generative AI Framework for HMG (see our summary here).
The playbook will be relevant to all parts of the public sector looking to procure, develop and deploy AI. It will also be of interest to those working with the public sector. The purpose of the playbook is to:
support the public sector in better understanding what AI can and cannot do, and how to mitigate the risks it brings. It will help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, wellbeing, and trust of the public we serve.
We summarise the key points of the playbook here.
What is AI?
The playbook uses the OECD's definition of AI:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
This happens to be very similar to the definition of AI systems in the EU AI Act (difference in bold) (see our flowchart for navigating the EU AI Act here):
‘AI system’ means a machine-based system **that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and** that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
The playbook includes various examples of current (or potential) use cases for AI across the public sector. It also points to examples identified by the National Audit Office (see our summary here), and the ongoing records published under the Algorithmic Transparency Recording Standard, which central government and arm's-length bodies are required to use (subject to some exceptions, see our summary here).
Principles
The playbook includes 10 principles to “guide the safe, responsible and effective use of artificial intelligence (AI) in government organisations”. These build upon the five principles in the government's White Paper on AI regulation (here). The 10 principles are:
- Principle 1: You know what AI is and what its limitations are
- Principle 2: You use AI lawfully, ethically and responsibly
- Principle 3: You know how to use AI securely
- Principle 4: You have meaningful human control at the right stage
- Principle 5: You understand how to manage the AI life cycle
- Principle 6: You use the right tool for the job
- Principle 7: You are open and collaborative
- Principle 8: You work with commercial colleagues from the start
- Principle 9: You have the skills and expertise needed to implement and use AI
- Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place
The playbook sets out further guidance for each principle, as well as on building and buying AI, developing the relevant teams, and the key limitations of AI.
Legal issues
The playbook also sets out various legal issues that may arise with the procurement, development and use of AI, such as:
- Data protection
- Contractual issues, such as how contracts:
  - deal with intellectual property
  - ensure the level of transparency needed to help buyers understand their systems
  - transfer a project to new or successor suppliers
  - assist with the defence against any legal challenge
  - set procedures for system errors and outages that recognise the potential consequences of performance failures
- Intellectual property, including copyright (covering ownership, rights and liabilities)
- Equality
- Public law issues
- Public procurement
- Human rights
The playbook emphasises the need for ‘strong governance processes’ because of the risks related to lawfulness, security, bias and data. Governance may be built into existing frameworks or implemented as a new governance framework, but in any event should focus on:
- "continuous improvement through the inclusion of new knowledge, methods and technologies
- identifying and working with important stakeholders representing different organisations and interests, including Civil Society Organisations (CSOs) and sector experts. This will help create a balanced view throughout the life cycle of any AI project or initiatives
- planning for the long-term sustainability of AI initiatives, considering scalability, long-term support, maintenance, ongoing stakeholder involvement and future developments."
The playbook is not intended to be comprehensive: it is a ‘launchpad’ that will develop over time. The public sector can expect to see further updates to the playbook, as well as various other government initiatives, such as through the government's AI Opportunities Action Plan (see our summary here, and our summary from a procurement perspective here).
Burges Salmon's AI and public sector teams have expertise in public sector issues concerning AI, having written the practitioner's text on public law, procurement law and AI, and they regularly help public sector organisations on their AI journey.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.
The image with this article is from the posters which accompany the playbook. They are available here.
“The publication of the AI Playbook highlights the competence and extraordinary work already being done in the AI space across the public sector. Developed collaboratively, with input from many government departments, public sector institutions, academia, and industry, this guidance reflects our commitment to continuously engaging with and learning from wider civil society.”

Feryal Clark MP, Parliamentary Under-Secretary of State for AI and Digital Government, Department for Science, Innovation and Technology (DSIT)