The UK Parliament Communications and Digital Committee is launching an inquiry that will "examine large language models and what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks".

The inquiry will evaluate the work of Government and regulators, examine how well this addresses current and future technological capabilities, and review the implications of approaches taken elsewhere in the world.

The Committee's statement recognises that large language models present specific opportunities but also specific risks:

  • Goldman Sachs has estimated generative AI could add $7 trillion (roughly £5.5 trillion) to the global economy over 10 years. Some degree of economic disruption seems likely, though many new roles could also be created in the process.
  • However, LLMs can produce fictitious (but believable) answers, in a way that is 'black box' ("where it is difficult to understand why a model decides on a course of action and what it might be capable of doing next"), and may make undesirable practices easier, such as fraud and disinformation.

The Committee's work should be seen in its wider context, particularly as it concerns large language models:

There is a lot going on in the world of AI, in particular with (current and proposed) regulations, technical standards and governance. Clarity will be welcome in many areas, but there is also a risk of divergence nationally and internationally. Whether or not the various inquiries and potential regulatory regimes will produce clarity remains to be seen, but the direction of travel towards greater intervention and scrutiny for AI systems is clear.

The Committee invites written contributions, to be submitted by 5 September 2023.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Martin Cook, or any other member of our Technology team.