2024 saw multiple significant developments in AI law, regulation and policy in the UK, EU and globally. This article recaps some of the key AI law and regulation highlights of 2024 and what to look forward to in 2025.
2024 Highlights
EU
The EU AI Act came into force on 1 August 2024. The AI Act regulates the use of AI within the EU, as well as by those outside the EU whose systems affect EU citizens or the EU market (although other legislation is also relevant to AI; see the EU's digital strategy here). Transition periods have begun, which include, from 2 February 2025, AI literacy requirements (Article 4) and prohibitions on specific AI systems (Article 5) (see our AI Act flowchart here).
The EU Commission announced that over 100 organisations have signed the AI Pact, a series of voluntary commitments to start applying the principles of the AI Act ahead of its application and to enhance engagement between the EU AI Office and all relevant stakeholders, including industry, civil society and academia (here). Further, the EU is consulting on codes of conduct which will help stakeholders understand how to implement the AI Act in practice.
UK
It was announced in the King's Speech (July) that the government will ‘seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.’ We continue to await the proposed regulation, which now appears to be set for 2025 (see this FT article).
There have also been proposals in the House of Lords for AI regulation:
- The Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] has been proposed and is currently at its second reading in the House of Lords. If enacted, the Bill would create new regulations for the use and procurement of automated and algorithmic tools in decision-making processes within the public sector (here);
- The Artificial Intelligence (Regulation) Bill has had its second reading in the House of Lords, with various issues and risks raised and, most significantly, praise voiced. If enacted, the Bill would create a new UK Artificial Intelligence (“AI”) regulator, require the appointment of Chief AI Officers, and mandate the introduction of further AI regulation (we picked out the key parts of the AI Bill here). The next stage is the Committee Stage, a line-by-line examination of the Bill, which is yet to be scheduled. We picked out a few select points from the second reading in the House of Lords (here).
In February 2024, the UK Government's response to the AI White Paper consultation outlined a pro-innovation regulatory approach based on cross-sectoral principles, a context-specific framework, international collaboration, and voluntary measures for developers (here).
In response to the White Paper, regulators published their strategic review updates:
- Information Commissioner's Office (ICO);
- Bank of England and Prudential Regulation Authority;
- Financial Conduct Authority (FCA);
- MHRA;
- Ofsted;
- Ofgem;
- CMA;
- Ofcom;
- Equality and Human Rights Commission (EHRC);
- Health and Safety Executive (HSE);
- Legal Services Board;
- Office for Qualifications and Examinations Regulation (Ofqual).
As part of those strategic updates, regulators identified various work that they would undertake in 2024, 2025 and beyond. Notable examples of regulator, government and sector activity in 2024 include:
- Aviation - the CAA published i) its response to emerging AI-enabled automation (CAP3064), ii) its strategy for regulating AI in the UK aviation industry (CAP3064A here), and iii) its strategy for how it will use AI (here);
- Defence - the Ministry of Defence (MOD) released JSP 936 Part 1, a policy aimed at guiding the safe and responsible adoption of Artificial Intelligence (AI) within Defence, which outlines the necessary steps and considerations for integrating AI (here);
- Finance - the Bank of England and the FCA published the results of a joint survey into the use of artificial intelligence (AI) and machine learning (ML) in the UK's financial services industry (here). Also, the FCA has opened a questionnaire to understand current and future uses of AI in the UK financial sector and the financial services regulatory framework. The FCA states that it is “keen to gather a wide range of views from different market participants to understand what transformative use cases may develop, and what we can do to support opportunities for beneficial innovation.” The responses will help shape its future regulatory approach (here);
- Asset management - the Technology Working Group to the HM Treasury's asset management taskforce published their third and final report (the Report) in collaboration with the Investment Association (IA). While the first two reports focused on fund tokenisation, this report turns to AI, exploring how it can be utilised for new opportunities across the investment management sector (here);
- Education - the UK Department for Education published a report on GenAI use cases in education, building on the insights from hackathons, proof of concept developments and user research (here);
- Privacy - the ICO initiated a public consultation series focused on the intersection of generative Artificial Intelligence (AI) and data protection laws (here). The outcomes report was published in December (here);
- Competition - the CMA published papers on foundation models (here).
Further regulatory work was seen as part of the Digital Regulation Co-operation Forum (DRCF), which supports co-operation between the CMA, FCA, ICO and Ofcom on digital regulation. It launched an AI and Digital Hub pilot to “support innovators working on AI or digital products by providing informal advice on complex regulatory questions that cross more than one DRCF regulator’s remit. This free service will make it easier to get support from two or more regulators at once, via the DRCF website, rather than having to approach each one separately” (here). Notably, when considering their approaches, regulators will also be considering economic growth as a result of the growth duty placed upon them by the Deregulation Act 2015; that duty was extended in 2024, with further guidance (here).
Also, the UK AI Safety Institute continued to test AI systems and started to write about their experiences (such as here).
Not only is the public sector talking about regulation of AI, it is also looking at using it. However, levels of transparency into public sector use of AI remain mixed:
- The UK's National Audit Office (here) and Alan Turing Institute (here) reported on growing interest and exploration of generative AI use in the public sector.
- Some of the regulatory strategies (listed above) refer to their use of AI, such as the CAA.
- The UK Algorithmic Transparency Recording Standard (“ATRS”) saw limited further public activity, but public bodies have been busy considering what (if anything) requires publication on the ATRS. In February 2024, the AI White Paper consultation response announced that use of the ATRS will become a requirement for all central government departments, with an intent to extend this to the broader public sector over time.
Finally, the new UK government has not announced whether it will pick up the previous government's mooted 'Code of Practice' for commercial text and data mining (TDM). Plans for a Code of Practice were announced following the previous government's decision to abandon its plans to allow commercial TDM, which had been met by strong disapproval from the creative industries. The Code of Practice was itself then shelved.
DSIT
In October, the UK's Department for Science, Innovation & Technology (DSIT) published its Artificial Intelligence Sector Study, which aims to better understand the size, scale, profile and economics of AI activity in the UK (here).
DSIT has also been actively working on AI assurance:
- DSIT published a report on what the AI assurance market in the UK looks like and how government is supporting its growth (here);
- DSIT published a report about 'Accelerating the growth of the UK's AI assurance market' (here);
- DSIT published an Introduction to AI assurance (here);
- the UK government's Responsible Technology Adoption Unit (RTAU, a directorate within DSIT and formerly the Centre for Data Ethics and Innovation) has published an updated Portfolio of AI Assurance Techniques. The portfolio invites users to “Explore new use cases showing how real-world examples promote trustworthy AI development. Essential for anyone designing, deploying, or procuring AI” (here);
- The RTAU has also produced a ‘Model for Responsible Innovation’, a ‘practical tool … to help teams across the public sector and beyond to innovate responsibly with data and AI’ (here);
- DSIT published and is seeking views on its AI Management Essentials tool, "a resource that is designed to provide clarity to organisations around practical steps for establishing a baseline of good practice for managing artificial intelligence (AI) systems that they develop and/or use” (here).
There have been developments also specific to the public sector:
- The UK Central Digital and Data Office (CDDO, part of the Cabinet Office) published guidance for those working within UK government and public sector organisations on how to use generative artificial intelligence (“AI”) safely and securely (here);
- Further, the UK's Cabinet Office has published Procurement Policy Note (PPN) 2/24 on Improving Transparency of AI use in government procurement, both in terms of understanding where AI has been used in a bid and how AI is intended to help deliver goods and services procured (here).
Legal sector
2024 saw further developments in guidance issued to lawyers in the UK and globally:
- HM Courts and Tribunals Service in England & Wales has published guidance for judicial office holders regarding AI (here);
- the Bar Standards Board (BSB) produced (non-binding) guidance for barristers which emphasised the informed use of generative AI tools (here);
- UNESCO, as part of its AI and the Rule of Law programme, has published draft guidelines for the use of AI systems in courts and tribunals (here).
International
There is a lot of work going on internationally to discuss AI, in particular, around AI safety and to exchange experiences in specific sectors. The UK government's approach remains to take an active role internationally, so further developments are to be expected in 2025.
- Seoul hosted the second AI Safety Summit (here);
- The UK, US and the EU, amongst others, became signatories to the Council of Europe Framework Convention on Artificial Intelligence (also known as the AI Convention), the “first legally binding international treaty aiming to ensure that AI systems are developed and utilised in ways that respect human rights, democracy and the rule of law” (here);
- The G20 Ministers responsible for Digital Economy emphasised AI's role in promoting inclusive and sustainable development, committing to international cooperation for safe, ethical AI systems;
- In the G7, Ministers introduced a Toolkit for AI in the Public Sector, developed with the OECD and UNESCO, for the ethical development, deployment and use of AI in public services.
The US National Institute of Standards and Technology (NIST) published a risk management framework for generative AI (here), including risk sub-categories and mitigations. These are mapped against, and to be read in conjunction with, the NIST Artificial Intelligence Risk Management Framework launched in January 2023 which is intended for voluntary use and to improve the ability of organizations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems (here).
The MIT Risk Repository was launched to provide a comprehensive and live database of AI risks to aid awareness, risk assessments, and research and development (here).
AI Case law
Disputes continue around the use of AI, although they remain relatively few and far between, at least from what is publicly available:
- In a test case the UK Supreme Court confirmed that the ‘inventor’ identified in a patent application must be a natural person, not an AI (here). However, the judgment is largely procedural and does not address the wider question of whether inventions created by an AI system can be patented;
- Copyright infringement claims against major Gen AI companies continue to progress around the world, particularly in the US. The only high profile claim in progress in the UK courts is Getty Images' claim against Stability AI, which covers passing off, copyright, trade marks, and database right. Trial is set for June 2025 and a judgment is expected in autumn 2025.
- And there are signs of AI being used in litigation. For example, the First Tier Tribunal in England considered, amongst other things, whether evidence generated by ChatGPT about potential keywords to search for documents could be used in court to imply that the keywords actually used were too narrow: Oakley v Information Commissioner [2024] UKFTT 315 (GRC) (18 April 2024) (here). Also, a New York court warned against using AI in legal proceedings, specifically in expert evidence: Matter of Weber, 2024 NY Slip Op 24258 (Surrogate's Court, Saratoga County) (here).
What does the future hold for AI law, regulation and policy in 2025?
There are likely to be continued legal and regulatory challenges to overcome and technological advancements to navigate as we move into 2025. The most notable anticipated developments on our radar are:
- EU AI Act - February 2025 marks the end of the first transition period, covering AI literacy (Article 4) and prohibited systems (Article 5). The EU will continue its consultation on the code of conduct for general-purpose AI systems, and we can expect publications on how illustrative AI systems are treated under the Act, such as high-risk AI systems, to help better understand how parts of the Act operate. We should also learn the identity of each member state's national competent authority. Guidance on the reporting of serious incidents is also expected to be published.
- UK AI regulation - the UK AI Regulation Bill is expected to be published with a consultation. Debates in the House of Lords over other AI bills (listed above) are expected to continue, and regulators are expected to remain active with further guidance.
- UK government - Government is expected to push ahead with more of its plans under its response to the White Paper, although many may have limited visibility to the public. More submissions to the Algorithmic Transparency Recording Standard are anticipated giving further insight into where AI is being used in the public sector. The government's AI Action Plan is expected to be produced.
- UK Competition - the Digital Markets, Competition and Consumers Act 2024 is due to come into force on 1 January 2025. The CMA has confirmed that AI and its deployment by firms will be relevant to its selection of Strategic Market Status (SMS) candidates, particularly where AI is deployed in connection with other more established activities. The CMA will now have the power to impose conduct requirements on companies with SMS status, and also to make pro-competitive interventions to address anti-competitive factors relating to a digital activity.
- UK regulators
- according to regulators' strategic updates, we can expect numerous further publications from most of them, too numerous to list here;
- the FCA stakeholder consultation closes in January;
- the DSIT AI Management Essentials consultation also closes in January;
- the next Artificial Intelligence Action Summit will be in France in February.
- In healthcare, the MHRA is due to put the Pre-Market Statutory Instrument and Future Core Regulations before Parliament in 2025, reforming medical device regulation in the UK and including amended definitions of software and AI as a medical device. The MHRA AI Airlock will report its findings in 2025, with five AI systems taking part, including an LLM, which could pave the way for the first LLM to be approved as a medical device globally. The FDA has continued publishing extensively on AI, with hints that AI medical scribes may become regulated in future.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member in our Technology team.
Visit our AI law, regulation and policy blog (here) and sign-up to our AI newsletter (here). With thanks to Alice Gillie for research and drafting.