The UK Central Digital and Data Office (CDDO, part of the Cabinet Office) has published guidance for those working within UK government and public sector organisations on how to use generative artificial intelligence (“AI”) safely and securely.

The framework provides useful primers to help the public sector understand generative AI, its potential applications and its limitations.  We don't cover those here.  Instead, we summarise:

  1. the ten common principles ‘to guide the safe, responsible and effective use of generative AI in government organisations’, building upon the principles in the UK's White Paper on AI regulation; 
  2. the guidance on building generative AI solutions; and 
  3. the guidance on using generative AI safely and responsibly.

A couple of points to note:

  • The focus of the framework, for now, is on Large Language Models (LLMs), as it is recognised that these applications have the “greatest level of immediate application in government”. For example, LLMs can be used to prepare first drafts of standard emails, support the review and summarisation of large amounts of information, and translate documents. The benefits of such applications are likely to be increased efficiencies in terms of resource allocation, time, and costs in the public sector.
  • The framework is recognised to be incomplete and dynamic; generative AI is developing rapidly and best practice ‘in many [areas] has not yet emerged’, so the CDDO intends to update the framework regularly.  The framework is not intended to be a detailed technical manual; there are other resources for that.

1. Principles of generative AI in government organisations

The framework defines ten common principles to guide the safe, responsible and effective use of generative AI in the public sector:

Principle 1: You know what generative AI is and what its limitations are

See the framework's guidance on understanding generative AI.

Principle 2: You use generative AI lawfully, ethically and responsibly

Engage with compliance professionals early in the journey, seeking legal advice where needed and establishing from the start how ethical concerns will be addressed, bias mitigated and personal data protected.  The White Paper's fairness principle, which states that ‘AI systems should not undermine the legal rights of individuals and organisations’, should be applied.

Principle 3: You know how to keep generative AI tools secure

Generative AI systems can be trained on, consume and store sensitive government information and personal data.  That data must be kept secure, and you must understand where it is held and where it flows.

Principle 4: You have meaningful human control at the right stage

Quality assurance processes should include an appropriately trained and qualified person who reviews outputs and validates any decision-making that those outputs feed into.

Principle 5: You understand how to manage the full generative AI lifecycle

Amongst other things, see existing government resources, including the Technology Code of Practice, to build a clear understanding of technology deployment lifecycles, and the National Cyber Security Centre's cloud security principles.

Principle 6: You use the right tool for the job

See the framework's section on identifying use cases, picking tools and what to consider when evaluating LLMs.

Principle 7: You are open and collaborative

Consider your stakeholders. Identify who within and outside of government may play a role in your project. 

The framework states that government should be open with the public about where and how algorithms and AI systems are used in official duties. The UK Algorithmic Transparency Recording Standard (ATRS) provides a standardised way to document information about algorithmic tools used in the public sector, with the aim of making this information clearly accessible to the public.

Principle 8: You work with commercial colleagues from the start

See the framework's buying generative AI section.

Principle 9: You have the skills and expertise that you need to build and use generative AI

See the framework's acquiring skills section.

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place

See the framework's governance section.

2. Building generative AI solutions

Using generative AI is a means to an end, not an objective in itself.  The goal needs to be defined, and business and user needs should be identified, rather than direction being set by the technology. The government's Service Manual helps make sure the right problems are being solved.

The framework sets out a non-exhaustive list of high-risk use cases that should be avoided. These include, for example, fully automated decision-making (especially for significant decisions, such as those concerning health and safety), and use where high degrees of accuracy and/or justification are required. The guidance recognises that the outputs of generative AI may be inaccurate and opaque, which is insufficient where the rationale for decisions needs to be clearly explained.

The framework also sets out an overview of buying generative AI.  This signposts various existing guidance, including: the government guidelines for AI procurement; the Digital, Data and Technology Playbook; the Sourcing Playbook; and the Rose Book.  It also sets out points to consider when procuring generative AI, including specifying requirements, running the procurement, and aligning procurement with ethical considerations.  In particular, it provides a non-exhaustive list of current and emerging laws and regulatory developments that will need to be considered, including data protection, the UK's White Paper on AI regulation, and the Online Safety Act.

3. Using generative AI safely and responsibly

The framework is clear: ‘although generative AI is new, many of the legal issues that surround it are not. For example, many of the ethical principles discussed in this document, such as fairness, discrimination, transparency and bias, have sound foundations in public law. In that way, many of the ethical issues that your team identifies will also be legal issues, and your lawyers will be able to help to guide you through them.’

In particular, the framework provides detail of example legal issues, including: data protection; contractual issues; intellectual property and copyright; equalities issues, including under the Equality Act 2010; public law issues, such as procedural fairness; and human rights.

The framework goes on to explain the key ethical themes that should be addressed:

  1. Transparency and explainability
    1. consider what you are transparent about: technical transparency; process transparency; and outcome-based transparency and explainability;
    2. consider also how and to whom you are being transparent: internal transparency; public transparency;
    3. further, consider what existing standards can be drawn upon, such as the Algorithmic Transparency Recording Standard.
  2. Accountability and responsibility 
    1. to establish accountable practices across the AI lifecycle, consider three elements: answerability; auditability; and liability;
    2. ultimately, responsibility for any output or decision made or supported by an AI system rests with the public organisation. Where generative AI is bought commercially, vendors should understand their responsibilities and liabilities, put the required risk mitigations in place and share relevant information.
  3. Fairness, bias and discrimination 
    1. 'Fairness, in the context of generative AI, means ensuring that outputs are unprejudiced, and do not amplify existing social, demographic, or cultural disparities.'
  4. Information quality and misinformation 
    1. consider how to optimise prompts to improve the quality of output, how to verify and cross-reference information produced against trusted sources to ensure accuracy, and how to oversee and regularly review generative AI system performance.
  5. Keeping a human-in-the-loop
    1. The framework puts the risks succinctly, including: ‘Generative AI also lacks flexibility, human understanding and compassion. While humans are able to take individual circumstances into account on a discretionary basis, AI systems do not have this capacity’.
    2. Further: 'Keeping a human-in-the-loop means ensuring that there is human involvement and supervision in the operations and outcomes of generative AI systems. In a broader context, humans should be involved with setting up the systems, tuning and testing the model so the decision-making improves, and then actioning the decisions it suggests.'

The publication of the framework comes at a time when AI in the public sector is an increasing focus of the political agenda, both domestically and internationally. For example, in the UK, the Artificial Intelligence (Regulation) Bill was proposed in the House of Lords (read more here). The framework makes clear that there is a lot of existing material for the public sector to navigate, a task that resources like this framework help with.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.

This article was written by Laura Tudor and Tom Whittaker.