The UK's National Audit Office (NAO) has produced a report on ‘how effectively the government has set itself up to maximise the opportunities and mitigate the risks of AI in providing public services’. The NAO concludes that development and deployment of AI in government bodies is at an early stage, with activity underway to develop strategies, plans and governance.

The report draws on discussions with various government departments, covering the UK's strategy and governance for AI in the public sector, the use of AI in the public sector, and support for scaling and adopting AI in the public sector. Here we summarise the key points.

Key findings - in numbers

  • 74 - number of deployed AI use cases reported by government bodies 
  • 37% - proportion of government bodies responding that had deployed AI 
  • 37% - proportion of government bodies responding that had not deployed AI but were actively piloting or planning AI 
  • June 2024 - target date by which central government departments are expected to have costed and reviewed AI adoption plans in place 
  • 21% - proportion of government bodies responding that had a strategy for AI in their organisation, while a further 61% had plans to develop one

Key findings

  • The government lacked a coherent plan to support adoption of AI in the public sector as part of its 2021 National AI Strategy
  • DSIT and the Cabinet Office have responsibility for AI. The draft strategy for AI adoption in the public sector does not set out which of these departments has overall ownership and accountability for its delivery.
  • There is limited integration of governance arrangements for AI adoption in the public sector and those for wider AI policy for the UK.
  • Departments are at an early stage in developing their own AI strategies and supporting governance arrangements.
  • As at autumn 2023, AI was not widely used across government, but government bodies are exploring opportunities.
  • Updating legacy systems and improving data quality and access is fundamental to exploiting AI opportunities but will take time to implement.
  • Assurance of AI within government bodies is variable and still developing.
  • Departments identified a lack of AI skills as a key barrier to adoption of AI in government.

What types of AI?

The NAO focussed on uses of machine learning for tasks - including language processing, predictive analytics and image or voice recognition - which were deployed, piloted or planned. The NAO excluded simple rules-based automation and use of AI embedded in pre-existing tools provided by default (for example, automatic email spam filters or email smart replies), as well as individuals’ ad-hoc use of publicly available AI.

What is government doing with AI?

According to the report:

  • Just over a third (37%) of the 87 government bodies that responded have deployed AI, with typically one or two use cases in each. 
  • Over two-thirds (70%) are piloting or planning AI, with a median of four use cases being explored per body. 
  • The most common purposes of deployed AI are to support operational decision-making or improve internal processes. 
  • Across government bodies, the NAO found common themes in the types of AI currently being piloted or planned. This suggests there is scope for sharing knowledge and working together on common forms of functionality, for example, AI use cases that support common business processes. Examples from the survey include use of AI to analyse digital images to extract information from documents or to identify and classify objects, use of natural language processing to summarise or draft text, and use of AI to assess trends and patterns and monitor live data.
  • NAO considered that between 35 and 45 of the 87 survey respondents were piloting or planning generative AI use cases.

How can government make transformation programmes a success?

NAO's previous work identified lessons for government to get right at the outset if large-scale transformation programmes are to be successful. These include:

  • Understanding the business need: The government must identify and understand the business need, before it determines the best solution for the problem. Without careful consideration at the outset of the complexities and interdependencies involved, the risk of programme failure increases. [NAO's] case studies reiterated the importance of assessing the business need before determining what solution (including what AI technology) might be needed. 
  • Clear accountabilities and senior leadership: Clear accountability structures are needed to ensure senior leaders can be held to account for delivery. In cross-government programmes, like AI adoption in the public sector, appointing a lead department to oversee delivery is important and senior sponsorship and strong leadership is also necessary. The draft strategy for AI adoption in the public sector does not set out a lead department with overall accountability. 
  • Identifying desired outcomes and performance measures: It is important to have clarity on the outcomes the programme is aiming to achieve, including the benefits it expects to realise. Key performance indicators should be tracked, including establishing baseline measures at the outset against which to assess progress. These have not yet been put in place for the strategy for AI adoption in the public sector. 
  • Assessing workforce impacts: Realising the benefits of large-scale adoption of AI will require changes in the roles of civil servants. The implications for the overall composition of the workforce and the skills required are not yet considered in detail in the strategy for AI adoption in the public sector.
  • Addressing legacy systems and data: The government relies on legacy systems (with associated data quality and consistency issues) for many important services. Case studies highlighted the importance of considering the dependencies between AI adoption plans and wider digital transformation programmes, to ensure plans are feasible and build on existing modernisation programmes. 
  • Ensuring the right mix of capability: Successful implementation of AI programmes is dependent on having the right skills in place. Case studies noted that access to analytical skills was important to understand opportunities and to design and engineer AI use cases. Digital and technology capacity was also needed to implement AI solutions. A case study with experience of trialling AI also noted the importance of capacity within operational teams to trial AI use cases and support adoption.

What are the risks facing government through use of AI?

Government bodies identified to NAO a range of AI risks as barriers to implementing AI in their organisations. 

These included legal risks, such as a lack of understanding or clarity on legal liability (67% of the 87 responding bodies); risks of inaccurate outputs, for example due to bias, discrimination or disinformation (57%); and security risks, including risks to privacy, data protection and cyber security breaches (56%).

Support from the centre of government for artificial intelligence (AI) adoption was viewed as important, in particular for sharing knowledge and learning on AI use across government and for fostering understanding of legal considerations, including legal liability. These were ranked above improvements to skills, funding, and data access.

How is the Algorithmic Transparency Recording Standard being used?

The intention of the ATRS is to ‘help public sector organisations provide clear information about the algorithmic tools they use, and why they’re using them’.

In its AI white paper consultation response published in February 2024, the government announced that use of the ATRS will become a requirement for all central government departments, with the intent to extend this to the broader public sector over time.

Currently, use of the ATRS is mixed. Respondents (87 government bodies responding) reported complying with the ATRS as follows: 13% said they always complied; 13% usually; 34% sometimes; 3% rarely; and 38% never.

Notably, the current ATRS repository has only 7 entries published, in contrast to the 74 deployed AI use cases reported by government bodies.

Consequently, whilst the NAO report provides a lot of detail (much of which is not summarised here), it appears that further detail about public sector use of AI is not currently publicly available, either easily (e.g. through the ATRS) or potentially at all.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Tom Whittaker, David Varney, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog: AI: Burges Salmon blog.