The Government's Response (here) to the House of Commons Science, Innovation and Technology Committee's May 2024 report on the Governance of AI (the Report, here) has been published.
The Committee's Report appears to be the culmination of its inquiry, which started in October 2022 to ‘examine the impact of AI on different areas of society and the economy; whether and how AI and its different uses should be regulated; and the Government’s AI governance proposal’. Before producing its Report, the Committee received over 100 written submissions (including from Burges Salmon) and took evidence from witnesses. An interim report was published in August 2023 (see our summary here).
The government responded to each of the Report's 62 recommendations, which cover the themes of AI regulation in the UK, the role of regulators, AI in the public and private sectors, the AI Safety Institute, and international work. Here we summarise the key points from the Committee's Report and the Government's Response.
AI-specific legislation
- Committee: the government should be ready to introduce AI-specific legislation and keep the current approach under consideration.
- Government response: the government intends to “establish appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” and will consult on the proposals. According to the government's response to the UK's AI Opportunities Action Plan, we can expect the consultation in Spring 2025.
Role of regulators
- Committee: welcomes the government's regulatory gap analysis and recommends that it be published alongside suggestions for delivering the necessary co-ordination and information sharing.
- Government response: Government expects regulators to play a key role, and will continue to work with the Regulatory Innovation Office and provide funding to boost regulators' AI capabilities.
“The Government recognises that, beyond placing requirements on the development of the most powerful artificial intelligence models, there are a broad range of issues associated with AI development and deployment which require regulatory oversight.
… In most cases, we believe that our existing expert regulators are best placed to apply rules to the use of AI in the contexts they know better than anyone else. The Government remains committed to a pro-innovation approach, with existing expert regulators addressing AI risks in their sectors, understanding where and how the product or service may be used. We are committed to ensuring that regulators have the right expertise and resources to make proportionate and informed regulatory decisions about AI in their sectors.”
AI in the public and private sectors
- Committee: welcomes the establishment of i.AI (the incubator for AI, a team of technical experts inside government) and the focus on AI deployment in the public sector productivity programme. The government should drive safe adoption of AI in the public sector, confirm the full list of public sector AI pilots being led or supported by i.AI and how they will be evaluated, undertake an assessment of the existing civil service workforce's AI capability, and require all public bodies to use the Algorithmic Transparency Recording Standard.
- Government response:
- government has established the Digital Centre of Government within DSIT and will expand the work of i.AI. Some of i.AI's work is described on the i.AI website, but not all of it, as i.AI incubates a large number of projects (around 50 had been incubated at the time of the government's response).
- Government is working on an update to the Generative AI Framework (see our overview here) alongside independent guidance.
- The government's rollout of the mandatory Algorithmic Transparency Recording Standard (ATRS) - a public register of where the public sector is developing and deploying AI systems - is well underway with central government departments and a priority group of around 85 arm's-length bodies. The government also says it “will also shortly publish on GOV.UK a scope and exemptions policy explicitly setting out the organisations for which use of the ATRS is currently mandatory. As a Data Standards Authority endorsed product, the ATRS remains recommended for use in the broader public sector, and we will continue to explore options for further embedding and enforcing its use”.
AI Safety Institute
- Committee: the government should continue to empower AISI to recruit the talent it needs and provide specific updates about AISI's work, including any AI models it has been unable to access and the reasons.
- Government response: AISI will be put on a statutory footing to strengthen its role in leading voluntary collaboration with AI developers and international co-ordination on AI safety. AISI continues to work with multiple AI developers and AI Safety Institutes, but many of the details are commercially sensitive and cannot be published.
International
- Committee: US and EU AI governance developments, such as the EU AI Act, are clear signs of attempts to secure competitive regulatory advantage. “The current Government is therefore right to have encouraged the growth of a strong AI sector in the UK, engaged with leading developers through the AI Safety Institute and future Summits, and participated in international standards fora. This international agenda should be continued by the next Government, and coupled with the swift establishment of a domestic framework that sufficiently addresses the [specific challenges] of AI Governance highlighted in our interim Report.”
- Government response: the government will continue close engagement with international partners, including through AI Action Summits. It will also continue to champion the multi-stakeholder, industry-led standards development process and the standard-setting work of the UK AI Standards Hub.
Specific challenges of AI governance
- Committee: the Committee raised various challenges posed to AI governance and related recommendations, including: deployer disclosure of performance and bias testing; privacy; competition; electoral integrity; violence against women and deepfakes; transparency and accuracy of algorithmic decision-making; access to high-quality datasets including public data; improving and expanding the scale of the UK's AI Research Resource; explainability; public procurement; law enforcement; copyright; liability frameworks; and impacts to the labour market.
- Government response: in summary, the government identified various ways in which parts of government, regulators, the wider public sector and third-party organisations are working together and progressing specific actions in relation to those challenges, such as the ATRS and the AI Management Essentials consultation. Of note, DSIT has established the Central AI Risk Function, a function envisaged by the UK's White Paper on AI regulation.
The Report should be read alongside the UK's AI Opportunities Action Plan and the government's response to it (which we wrote about here), which identified 50 actions and their anticipated timings to help Britain become ‘an AI superpower’. A number of those actions are activities referred to in the government's Response to the Report, so the Action Plan provides useful additional context and updated timings.
The Science, Innovation and Technology Select Committee is appointed by the House of Commons to examine the expenditure, administration and policy of the Department for Science, Innovation and Technology, and associated public bodies. It also exists to ensure that Government policies and decision-making across departments are based on solid scientific evidence and advice.
There is no indication from the Report or Response about whether further reports are to be expected. The inquiry's website says that no further evidence is being accepted.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.