A recent report by the UK’s National Audit Office (NAO) concluded that the development and deployment of AI in government bodies is at an early stage, with activity underway to develop strategies, plans and governance. Here we summarise the key points that those in the public sector procuring, developing and deploying AI systems need to know.

Use cases

There is no comprehensive list of current use cases of AI in the public sector. However, the government is looking to expand the use of the Algorithmic Transparency Recording Standard, which should shed further light (see our article here). From various reports it is clear that AI systems are being used, or considered for use, for multiple purposes, including:

  • Enhancing internal processes – such as AI that facilitates information retrieval, classification or summarisation;
  • Aiding in operational decisions – for example, AI that predicts service-users at risk of poor outcomes to help target support more effectively;
  • Supporting research and monitoring – such as AI that estimates road traffic volumes from satellite imagery or AI that uses machine-learning to predict the energy efficiency of properties; and
  • Direct public interaction and service delivery – such as the use of an AI chatbot.

Each has a different legal, commercial and operational risk profile.

We are not aware of a single definition of AI used within the public sector, so public sector bodies need to consider whether, and to what extent, they define AI in order to bring clarity and consistency to their understanding and reporting.

Challenges and considerations

The Autumn Statement 2023 highlighted that AI has the potential to provide billions of pounds of productivity benefits to the public sector. To support AI adoption, the Spring Budget 2024 announced that funding would be made available for various AI-related projects under the Public Sector Productivity Programme (read more here, and here). However, the public sector will need to address a number of challenges when procuring, developing and deploying AI systems. For example:

  • Procurement of AI systems – Public sector purchasers must consider public procurement law when purchasing goods and services, including AI systems. Public procurement of AI systems raises various challenges – for example, specifying requirements clearly for a cutting-edge system that is intended to change over time, in parallel with a technology and market that are also evolving quickly. The public sector also needs to consider various guidance, some of which is helpfully listed in Procurement Policy Note 02/24 (see our article here).
  • Data access and quality – Access to high-quality and often large amounts of data is crucial for the development and implementation of AI systems. The recent NAO report highlighted that one of the barriers for implementation of AI in the public sector was a lack of access to such data. Additionally, there is a risk that AI systems are trained on data that reflects, amplifies and perpetuates biases, and mitigations need to be in place. 
  • Skills – In order for the public sector to develop and deploy AI systems, there needs to be a focus on recruiting and retaining staff with the necessary skills. The government is seeking to implement initiatives which will allow those already working in the public sector to be upskilled and to build knowledge and capacity. If skills shortages persist, the AI activities that could realistically be achieved will need to be reconsidered. 
  • Transparency and explainability – The House of Lords Select Committee on Artificial Intelligence stated that the use of AI in the public sector should involve mechanisms for transparency and ensuring that the public are aware of when AI is involved in making significant or sensitive decisions and how. The Committee continued that the public sector should consider only deploying AI systems that are able to generate explanations of how and why a decision was made. 
  • Legal liability – A recent research briefing by the Parliamentary Office of Science and Technology highlighted the potential lack of clarity of applicable liability frameworks in the event of AI causing harm. This lack of understanding and clarity on legal liability for AI use cases was similarly highlighted as a concern in the NAO report.
  • The Nolan Principles – A review by the Committee on Standards in Public Life (CSPL) concluded that the use of AI in the public sector could pose a threat to some of the Nolan Principles, particularly openness, accountability and objectivity. For example:
    • Openness requires holders of public office to act and take decisions in an open and transparent manner, and to withhold information from the public only if there are clear and lawful reasons for doing so. The CSPL suggested that the government is currently not upholding this principle in relation to information on the use of AI. Since that review, the government has introduced the Algorithmic Transparency Recording Standard (ATRS) to provide the public with clear information about the tools being implemented by public sector organisations, but according to the recent NAO report this standard has not yet been widely adopted. The government response to the White Paper stated that the government intends to extend the ATRS and make it mandatory throughout government during 2024.
    • Accountability requires holders of public office to be accountable for their decisions and actions and to submit themselves to the scrutiny necessary to ensure this. The CSPL stated that AI could blur accountability, undermine the attribution of responsibility for key decisions, and prevent the provision of meaningful explanations for AI-driven decisions.
    • Objectivity requires holders of public office to act and take decisions impartially, fairly and on merit, using the best evidence and without discrimination or bias. The CSPL highlighted the potential for AI to produce discriminatory results if flawed data is fed into the AI system.
    • The CSPL recommended that senior leaders within the public sector should assess any potential risks that AI systems present at the project design stage and ensure that responsibility for AI systems is distinctly assigned and documented. Public sector providers should also inform the public about their rights and the procedure for appealing against decisions made by automated and AI-assisted systems.
  • Equality and Human Rights Commission / Equality Act 2010 – Guidance by the Equality and Human Rights Commission (EHRC) highlighted the need to consider the Public Sector Equality Duty (PSED), established by the Equality Act 2010, when using AI systems.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.

Written by Emma Everett, Laura Tudor, and Tom Whittaker.