The Ada Lovelace Institute, an independent research institute which examines ethical and social issues arising from the use of data, algorithms and artificial intelligence, has published a report titled ‘Regulate to innovate’ (available here) which outlines how regulation can provide clear, unambiguous, and constructive rules on the use of AI. 

The report provides an overview of the aims and challenges of UK AI regulation and reviews the ‘regulatory toolkit’ available to legislators for tackling the issues posed by deeper AI integration, before analysing specific challenges.

Here we pick out the report's recommendations relevant to the Office for AI's white paper on the UK Government's approach to AI regulation, due in early 2022.

  1. New, clear regulations for AI

Firstly, the report argues the case for a new, clear regulatory framework for AI that reflects the principles laid down in the National AI Strategy.

One of the recommendations on this point may appear simple but is fraught with potential complexity – the need to establish a clear definition of AI systems that aligns with the Government’s approach to regulation more broadly. The precision of this definition would determine the scope of AI regulation and which uses and industries would be affected. It may also result in convergence or divergence with other jurisdictions' approaches, depending on how they define AI.

The second recommendation calls for the Government to create a central function to “oversee the development and implementation of AI-specific, domain-neutral, statutory rules for AI systems”:

“These domain-neutral statutory rules could:

(1) set out consistent ways for regulators to approach common challenges posed by AI systems …

(2) include and set out a requirement for, and mechanism by which the central function must regularly revisit the definition of AI, the criteria for regulatory intervention and the domain-neutral rules themselves …

(3) provide a means of requiring individual regulators to attend to, and address the systemic, long-term impacts of AI systems …

(4) provide a means for regulators to address all stages of an AI system’s lifecycle …

(5) be intended to supplement, rather than replace, existing laws governing AI systems.”

A further recommendation is for the Government to develop sector-specific codes of practice for AI regulation, allowing for more bespoke handling of AI usage within a broader, national framework.

However, the report contends that “a fully vertical or compartmentalised approach to the regulation of AI would be likely to lead to boundary disputes, with persistent questions about whether particular applications or kinds of AI fall under the remit of one regulator or another – or both, or neither”, and therefore suggests a hybrid model, incorporating horizontal and vertical elements.

To put it another way, cross-sector regulations and legislation would sit alongside sector-specific codes of practice.

  2. Improved regulatory capacity and coordination

Secondly, the report outlines a need for improved regulatory capacity, noting that “AI systems are often complex, opaque and straddle regulatory remits” and that “for the regulatory system to be able to deal with these challenges, significant improvements will need to be made to regulatory capacity”.

In doing so, the Ada Lovelace Institute makes four specific recommendations:

  • provide more funding for regulators to deal with the analytical and enforcement challenges posed by AI systems.
  • provide more funding and support for regulatory experimentation, and for the development of anticipatory and participatory capacity within individual regulators.
  • develop formal structures for capacity sharing, coordination and intelligence sharing between regulators dealing with AI systems.
  • grant regulators the powers needed to make use of a greater variety of regulatory mechanisms.

  3. Improving transparency standards and accountability mechanisms

Lastly, the report calls for improved transparency standards in order to stimulate innovation and build public confidence.

The ‘Regulate to innovate’ report suggests that the Government should consider how it can exert influence over international standards in order to improve the ‘transparency and auditability’ of AI systems, although it acknowledges that this would not be a ‘silver bullet’.

The need for such transparency has already been the subject of discussion and action: the Centre for Data Ethics and Innovation published a review (available here) addressing the issue of transparency in the context of AI (which we have considered here). Following this, the UK Cabinet Office’s Central Digital and Data Office published an algorithmic transparency standard (available here) for collecting information about how the Government uses algorithmic tools (which we have discussed here).

Further, the report argues that the Government should seek to maintain and strengthen legal mechanisms to protect and empower “journalists, academics, civil-society organisations, whistleblowers and citizen auditors to hold developers and deployers of AI systems to account”, which draws into focus how wide-reaching a mature and comprehensive AI regulatory regime may prove to be.

This report speaks directly to the third limb of the UK’s recent National AI Strategy (which we have written about here), which focuses on national and international governance of AI technologies. The UK's AI Strategy must be viewed in its international context and in light of its goal of making the UK an AI superpower. Regulation is often regarded as a key enabler of innovation, but some view it as a potential barrier. As a result, how and whether the recommendations are incorporated into the Office for AI's white paper will depend on how they can be balanced against the competing, and potentially diverging, interests of multiple stakeholders.

This article was written by Tom Whittaker and Liam Edwards.