The Ada Lovelace Institute has recently published a policy briefing setting out key considerations for the deployment of foundation models in the public sector (the "Briefing").

The Briefing identifies optimism amongst key stakeholders about the potential for foundation models to enhance public services in the context of budgetary constraints and growing user needs. Proposed public sector use cases include automating document analysis, catching errors and biases in policy drafts and improving customer service through chatbots.

The key findings identified in the Briefing are:

  • Inconsistent public sector usage: Currently, authorised use of foundation models in the public sector is limited to demonstrations, prototypes and proofs of concept. Despite this, there is evidence of informal use by individual civil servants outside these parameters.
  • Effective use requires meaningful evaluation of alternatives: Public-sector organisations must carefully evaluate mature and tested alternatives to foundation models to ensure foundation models are the most suitable and cost-effective solution for the problem, guided by the Nolan Principles of Public Life, which include accountability and openness.
  • Procurement challenges: Procuring foundation models for the public sector will be challenging, with risks associated with over-reliance on applications developed for the private sector that may not align with the needs of the public sector.
  • Risks: When developing, procuring or implementing foundation models, public-sector organisations should consider potential risks, including biases, privacy breaches, misinformation, security threats, overreliance, workforce harms and unequal access.

The Briefing also highlights the need for improved governance of foundation models in the public sector to ensure the delivery of value for money for taxpayers and to mitigate the risks and potential harms identified above. The Briefing provides a series of recommendations for policymakers, which include:

  • Regularly reviewing and updating guidance: Regularly review and update relevant guidance on the interpretation of the UK regulatory regime to adapt to evolving AI technology.
  • Procurement: Set mandatory procurement requirements that ensure foundation models uphold public standards and align with ethical standards.
  • Data residency: Ensure sensitive public data is held in the UK where required in order to comply with applicable legal requirements.
  • Third-party audits: Mandate independent third-party audits for all foundation models used in the public sector, whether developed in-house or externally.
  • Public engagement: Involve the public in governance decisions through methods like citizens' assemblies and participatory impact assessments.
  • Piloting and training: Pilot new use cases to identify risks, and provide relevant technical training for public sector employees working with foundation models to ensure appropriate risk management.
  • Ongoing monitoring: Continuously monitor foundation model applications throughout the full application lifecycle (including inception, development and operation in a live environment), incorporating assessment of real-life data and feedback.
  • Transparency standard: Implement the Algorithmic Transparency Recording Standard to provide clear information about the use of AI tools in the public sector.

The public sector has several options for procuring AI, but each carries risks. Engaging external providers risks over-reliance on private-sector suppliers, including a potential lack of alignment between applications developed for a wider range of private-sector clients and the needs of the public sector. In particular, public-sector clients:

  • are more likely to deal with highly sensitive data;
  • have higher standards of robustness; and
  • require higher levels of transparency and explainability in important decisions around welfare, healthcare, education and other public services.

Conversely, the report considers that 'developing bespoke public foundation models to replicate or compete with foundation models such as GPT-4 is unlikely to unlock significant public value at proportionate cost. ... It could therefore be beneficial for public-sector users to act as "fast followers", adopting established technologies and practices once they have been tried and tested, rather than always trying to use or develop the latest options.'

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.

This article was written by Alex Fallon.