The Digital and Technology ministers of the G7 countries published a declaration addressing, amongst other things, responsible AI and global AI governance. This came ahead of the G7 leaders' summit on 19 to 21 May 2023, and shortly before the vote by the EU's leading parliamentary committees on the EU AI Act (see Euractiv's article here).

In this article we highlight the key points from the Digital and Technology ministers' statement and summarise their relevance to the UK's White Paper on AI regulation (see our article on the White Paper).

The G7's statement

The G7 Summit is an international forum held annually for the leaders of the G7 member states of France, the United States, the United Kingdom, Germany, Japan, Italy, and Canada (in order of rotating presidency), and the European Union (EU).

The Digital and Technology ministers of the G7 countries declared that the G7:

  • reaffirm their commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits that AI technologies bring for all.
  • oppose the misuse and abuse of AI to undermine democratic values, suppress freedom of expression, and threaten the enjoyment of human rights.
  • stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, whilst recognising that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.
  • support the development of tools for trustworthy AI through multistakeholder international organisations, and encourage the development and adoption of international technical standards in international standards development organisations (SDOs) through private sector-led multistakeholder processes.
  • are committed to supporting the participation of all stakeholders from across sectors in SDOs, and to facilitating inclusive engagement with a special emphasis on the participation of SMEs, start-ups, academia and wider society.
  • reassert that AI policies and regulations should be risk-based and forward-looking to preserve an open and enabling environment for AI development and deployment that maximises the benefits of the technology for people and the planet while mitigating its risks.
  • plan to convene future G7 discussions on generative AI, which could include topics such as governance, safeguarding intellectual property rights (including copyright), promoting transparency, addressing disinformation (including foreign information manipulation), and how to utilise these technologies responsibly.

The UK's position 

It is notable that the G7 declaration recognises that G7 members may take different approaches to achieving trustworthy AI. The UK's White Paper on AI regulation shows that the UK will take a different approach to the EU with its proposed AI Act (click here for a flowchart on navigating the EU AI Act). Given that AI systems and those involved in the AI lifecycle often operate internationally, the G7 Digital and Technology ministers recognise the need to seek interoperability between regulatory frameworks that might otherwise diverge.

The UK is also clear on its desire to ensure global interoperability and international engagement; there is a section on the subject in the White Paper. The UK wants to continue to work closely with international partners to both learn from, and influence, regulatory and non-regulatory developments. It cites numerous examples where it is already doing this, including: being an active member of the Organisation for Economic Co-operation and Development's governance working party; being a contributor to, and founding member of, the Global Partnership on AI; and seeking bilateral AI engagement with other nations and jurisdictions, such as the EU (and its member states), the US, Canada, Singapore and Australia.

So there remains a risk that different approaches to regulating AI could result in multiple complex and diverging regulatory frameworks to navigate. However, the positive message from both the UK (in the White Paper) and the G7 Digital and Technology ministers (in the declaration) is that this risk is recognised and that there is political will to tackle it, in large part by encouraging active discussions across countries and organisations and by developing international standards.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.