The House of Lords Communications and Digital Committee has published a report on large language models (LLMs) and generative AI, in particular on the “Goldilocks problem” of addressing both the opportunities and risks of LLMs. It is a detailed, evidence-based report covering a wide range of areas, including: LLM future trends; open and closed LLMs; the UK's pro-innovation AI regulation strategy; LLM risk; the international context; and copyright. It makes 10 core recommendations, which we set out below, alongside others; the report is, however, worth reading in full.
Committee recommendations
- "Prepare quickly: The UK must prepare for a period of protracted international competition and technological turbulence as it seeks to take advantage of the opportunities provided by LLMs.
- Guard against regulatory capture: There is a major race emerging between open and closed model developers. Each is seeking a beneficial regulatory framework. The Government must make market competition an explicit AI policy objective. It must also introduce enhanced governance and transparency measures in the Department for Science, Innovation and Technology (DSIT) and the AI Safety Institute to guard against regulatory capture.
- Treat open and closed arguments with care: Open models offer greater access and competition, but raise concerns about the uncontrollable proliferation of dangerous capabilities. Closed models offer more control but also more risk of concentrated power. A nuanced approach is needed. The Government must review the security implications at pace while ensuring that any new rules support rather than stifle market competition.
- Rebalance strategy towards opportunity: The Government’s focus has skewed too far towards a narrow view of AI safety. It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.
- Boost opportunities: We call for a suite of measures to boost computing power and infrastructure, skills, and support for academic spinouts. The Government should also explore the options for and feasibility of developing a sovereign LLM capability, built to the highest security and ethical standards.
- Support copyright: The Government should prioritise fairness and responsible innovation. It must resolve disputes definitively (including through updated legislation if needed); empower rightsholders to check if their data has been used without permission; and invest in large, high‑quality training datasets to encourage tech firms to use licenced material.
- Address immediate risks: The most immediate security risks from LLMs arise from making existing malicious activities easier and cheaper. These pose credible threats to public safety and financial security. Faster mitigations are needed in cyber security, counter terror, child sexual abuse material and disinformation. Better assessments and guardrails are needed to tackle societal harms around discrimination, bias and data protection too.
- Review catastrophic risks: Catastrophic risks (above 1,000 UK deaths and tens of billions in financial damages) are not likely within three years but cannot be ruled out, especially as next‑generation capabilities come online. There are, however, no agreed warning indicators for catastrophic risk. There is no cause for panic, but this intelligence blind spot requires immediate attention. Mandatory safety tests for high‑risk high‑impact models are also needed: relying on voluntary commitments from a few firms would be naïve and leaves the Government unable to respond to the sudden emergence of dangerous capabilities. Wider concerns about existential risk (posing a global threat to human life) are exaggerated and must not distract policymakers from more immediate priorities.
- Empower regulators: The Government is relying on sector regulators to deliver the White Paper objectives but is being too slow to give them the tools. Speedier resourcing of Government‑led central support teams is needed, alongside investigatory and sanctioning powers for some regulators, cross‑sector guidelines, and a legal review of liability.
- Regulate proportionately: The UK should forge its own path on AI regulation, learning from but not copying the US, EU and China. In doing so the UK can maintain strategic flexibility and set an example to the world – though it needs to get the groundwork in place first. The immediate priority is to develop accredited standards and common auditing methods at pace to ensure responsible innovation, support business adoption, and enable meaningful regulatory oversight."
Baroness Stowell of Beeston, Chairman of the House of Lords Communications and Digital Committee, said: “The rapid development of AI Large Language Models is likely to have a profound effect on society, comparable to the introduction of the internet. That makes it vital for the Government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks. We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.”

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Tom Whittaker, David Varney, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).
The full report is available at: https://publications.parliament.uk/pa/ld5804/ldselect/ldcomm/54/5402.htm