The UK Parliament Communications and Digital Committee is launching an inquiry that will "examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks".
The inquiry will evaluate the work of Government and regulators, examine how well that work addresses current and future technological capabilities, and review the implications of approaches taken elsewhere in the world.
The Committee's statement recognises that large language models present specific opportunities but also pose specific risks:
- Goldman Sachs has estimated generative AI could add $7 trillion (roughly £5.5 trillion) to the global economy over 10 years. Some degree of economic disruption seems likely, though many new roles could also be created in the process.
- However, LLMs can produce fictitious (but believable) answers, operate as a 'black box' ("where it is difficult to understand why a model decides on a course of action and what it might be capable of doing next"), and may make undesirable practices, such as fraud and disinformation, easier.
The Committee's work should be seen in the context of other developments relevant to large language models, in particular:
- The EU's proposed AI Act includes provisions specific to foundation models (large language models are a type of foundation model) (see our flowchart to navigate the AI Act here);
- The UK published its White Paper in March 2023 for "a pro-innovation approach to AI regulation", including specific consultation sought on large language models (see our overview here, and flowchart for navigating the White Paper here);
- The House of Commons Science and Technology Select Committee has launched an inquiry into the 'Governance of artificial intelligence (AI)' which will include a review of the White Paper;
- The Competition and Markets Authority has launched an initial review of artificial intelligence – specifically of foundation models.
There is a lot going on in the world of AI, in particular with (current and proposed) regulations, technical standards and governance. Clarity will be welcome in many areas, but there is also a risk of divergence nationally and internationally. Whether the various inquiries and potential regulatory regimes will produce that clarity remains to be seen, but the direction of travel towards greater intervention and scrutiny of AI systems is clear.
The Committee invites written contributions by 5 September 2023.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Martin Cook, or any other member of our Technology team.
"Our inquiry will [...] take a sober look at the evidence across the UK and around the world, and set out proposals to the Government and regulators to help ensure the UK can be a leading player in AI development and governance." Baroness Stowell of Beeston, Chair of the Committee