On 31 August 2023, the House of Commons Science, Innovation and Technology Select Committee published an interim report on its inquiry into the Governance of Artificial Intelligence (AI) (which we wrote about previously here). Burges Salmon is actively engaging with government consultations on AI regulation (you can read our response to the inquiry here and our response to the UK's White Paper on regulating AI here). The Government's response to the report is expected by 31 October 2023.

Below, we summarise the report’s findings on:

  • the key issues identified with AI governance – the report highlights twelve key challenges the Government needs to address in relation to AI: Bias, Privacy, Misrepresentation, Access to Data, Access to Compute, Black Box, Open-Source, Intellectual Property and Copyright, Liability, Employment, International Coordination and the Existential challenge; and
  • the UK’s approach to AI governance – the report recommends introducing legislation rather than the previous approach of relying on existing regulators to fill the regulatory gaps.

(The report is also of wider-ranging interest, including a summary of the AI legislative landscape, explanations of the nature of AI, and the potential benefits and risks in specific sectors, which we do not cover here.)

The overriding message of the report is that the authors “urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed.” This message is reaffirmed in each section and is the conclusion of the report.

AI governance - key issues

The report recommends that the UK Government’s approach to AI governance and regulation should address each of the twelve challenges outlined, both through domestic policy and international engagement. However, these issues are international in nature; the report notes that “The twelve Challenges of AI Governance which we have set out must be addressed by policymakers in all jurisdictions. Different administrations may choose different ways to do this” (emphasis added).

The report highlights twelve challenges that the UK Government and regulators must address:

  • The Bias challenge. AI can introduce or perpetuate societal biases via the data it relies upon, such as correlating female names to typically female roles.
  • The Privacy challenge. AI can allow individuals to be identified and personal information about them to be used in unexpected ways that may breach their fundamental rights.
  • The Misrepresentation challenge. AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions, or character. The use of image and voice recordings of individuals can lead to highly plausible material being generated which can purport to show an individual saying things that have no basis in fact.
  • The Access to Data challenge. The most powerful AI requires very large datasets, which are held by only a few organisations, raising competition and market concentration concerns.
  • The Access to Compute challenge. The development of powerful AI requires significant compute power, which is costly, hence access is often limited to a few organisations. The UK Government has announced plans to establish an Exascale supercomputer facility and an AI-dedicated compute resource to support research.
  • The Black Box challenge. Some AI models and tools cannot explain why they produce a particular result, which is a challenge to appropriate transparency. The challenge is further complicated by the fact that the better an AI model or tool performs, the less explainable it is likely to be.
  • The Open-Source challenge. Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.  
  • The Intellectual Property and Copyright challenge. Some AI models and tools make use of other people’s content. Policy must establish the rights of the originators of this content, and these rights must be enforced. Ongoing legal cases are likely to set precedents in this area, and a draft code is being produced by the Intellectual Property Office.
  • The Liability challenge. If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
  • The Employment challenge. AI will disrupt the jobs that people do and that are available. Policy makers must anticipate and manage the disruption.
  • The International Coordination challenge. AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking. The Government believes that the November International AI summit is a first step in doing this.
  • The Existential challenge. Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.

Potential AI laws

The UK Government published the AI White Paper in March 2023 (see our article on the White Paper and our flowchart for navigating the White Paper), which outlined a “pro-innovation approach to AI regulation”. In parallel, the EU is pushing through its AI Act, which is expected to be enacted in late 2023 or early 2024.

The report states that “a tightly-focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer”. 

In the context of the wider AI regulation discussion, this session of Parliament is the last opportunity before the next UK General Election for the UK to legislate on AI. Post-election, any legislation is unlikely to be enacted until late 2025 at the earliest: two years from now, and three years after the publication of the AI White Paper, which some commentators suggest is already falling out of date.

The report notes that a balance must be struck as although, “some observers have called for the development of certain types of AI models and tools to be paused, allowing global regulatory and governance frameworks to catch up… it should also be remembered that it is not unknown for those who have secured an advantageous position to seek to defend it against market insurgents through regulation.” Preserving competition and innovation in this market remains an objective of the UK approach.

The report argues that waiting at least two years for AI legislation risks the UK “being left behind by other legislation—like the EU AI Act—that could become the de facto standard and be hard to displace” even where the UK can offer a more desirable approach. The situation could mirror that of the GDPR, where UK law followed the EU's lead.

The BBC sought the Government’s views on the report. The Government did not confirm whether it agreed that a new law should be put forward. A spokesperson highlighted the November International AI summit and the £100m initial investment in a task force to encourage the safe development of AI models (which is, according to the Government, “more funding dedicated to AI safety than any other government in the world”).

Next Steps

The report does not outline the next steps for the inquiry or whether a final report will follow this interim one. Organisations should keep a keen eye on the Government's response to the report (and to the White Paper), and on the King’s Speech, to see whether the Committee’s recommendation is taken on board.

Burges Salmon regularly engages with government on AI regulation in the UK. For a copy of our response to the Committee’s review, click here.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, David Varney, Brian Wong or any other member of our Technology team.

Written by Abbie McGregor