The G7 has produced a toolkit for AI in the public sector (here). The G7 is an informal forum that brings together Italy, Canada, France, Germany, Japan, the United Kingdom, and the United States of America. The European Union also participates in the Group and is represented at the summits by the President of the European Council and the President of the European Commission.

The toolkit was developed from information collected through a questionnaire for G7 members, as well as existing work by international organisations and initiatives such as the Organisation for Economic Co-operation and Development (OECD), the Global Partnership on Artificial Intelligence (GPAI), and the United Nations Educational, Scientific and Cultural Organisation (UNESCO).

According to the toolkit, it is

a comprehensive guide designed to help policymakers and public sector leaders translate principles for safe, secure, and trustworthy Artificial Intelligence (AI) into actionable policies.

Recognising both the opportunities and risks posed by AI, this toolkit provides practical insights, shares good practices for the use of AI in and by the public sector, integrates ethical considerations, and provides an overview of G7 trends. 

It further showcases public sector AI use cases, detailing their benefits, as well as the implementation challenges faced by G7 members, together with the emerging policy responses to guide and coordinate the development, deployment, and use of AI in the public sector. 

The toolkit finally highlights key stages and factors characterising the journey of public sector AI solutions.

The report recognises that developing AI strategies for the public sector is a relatively new phenomenon, and that G7 countries are at differing stages.

Key messages from the report include:

  • Establish clear strategic objectives and action plans in line with expected benefits
  • Include the voices of users in shaping strategies and implementation
  • Overcome siloed structures in government for effective governance
  • Establish robust frameworks for the responsible use of AI
  • Improve scalability and replicability of successful AI initiatives
  • Enable a more systematic use of AI in and by the public sector
  • Adopt an incremental and experimental approach to the deployment and use of AI in and by the public sector

Common key enablers identified are:

  • talent and skills
  • procurement and partnerships
  • human-centric AI - related to the question: “Does the [national] strategy emphasise ethical, trustworthy, and human centric development, deployment and use of AI in the public sector?”
  • data
  • supporting infrastructure
  • innovation
  • funding
  • governance

Whilst G7 countries are at different stages of exploring AI in the public sector, common themes emerge. These include the phased approach being taken: framing the problem; ideating; prototyping; piloting; and scaling up.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member of our Technology team.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.