The Stanford Institute for Human-Centered Artificial Intelligence has published its AI Index Report 2022 (the ‘Report’), which aims to track, collate and distil data relating to Artificial Intelligence (‘AI’) in order to give policymakers, researchers and the general public a greater understanding of developments within the field of AI.

The Report provides detailed statistics on AI in relation to: 1) Research and Development; 2) Technical Performance; 3) Ethics; and 4) the Economy.  In this article we look at the fifth topic - AI and Legislation - the key takeaways from which are that there is:

  • an increase in the number of laws mentioning AI globally - the number of bills containing "artificial intelligence" that were passed into law grew from 1 in 2016 to 18 in 2021 in a selection of 25 countries (including the US, UK and major EU countries);
  • an increase in the number of AI laws that may come into force in the US - the number of AI-related bills proposed by state legislators in the US grew from 2 in 2012 to 131 in 2021;
  • an increase in the discussion of AI in the US Congress - 295 mentions of AI by the end of 2021 (halfway through the session) compared to 506 in the previous session.

Increase in AI legislation globally? Or just an increase in the number of laws referring to AI?

The Report analysed laws passed in 25 countries between 2016 and 2021 that contain the words "artificial intelligence".  Together, those 25 countries passed 55 AI-related bills, with a sharp increase from 1 in 2016 to 18 in 2021.

The United States dominated the list with 13 bills, starting in 2017 and passing 3 new laws in each subsequent year; it was followed by Russia, Belgium, Spain, and the United Kingdom.

Looking just at federal legislation in the US, there was an increase in the number of proposed bills relating to AI.  In 2015, one federal bill relating to AI was proposed; in 2021, there were 130.  But there was not the same increase in the number of bills passed: in 2021, "only 2% of all federal-level AI-related bills were ultimately passed into law".

The Report notes that the legislation passed "demonstrates the wide range of AI-related issues that have piqued policymakers' interest".  For example (emphasis added):

  • US - IOGAN Act (Identifying Outputs of Generative Adversarial Networks Act) - "This act directed the National Science Foundation to support research dedicated to studying the outputs of generative adversarial networks (deepfakes) and other comparable technologies."
  • UK - Supply and Appropriation (Main Estimates) Act 2020, c.13 - "A provision of this act authorized the Office of Qualifications and Examinations Regulation to explore opportunities for using artificial intelligence to improve the marking and administration of high-stakes qualifications."
  • France - Law No. 2021-1485 of November 15, 2021, aimed at reducing the environmental footprint of digital technology in France - "This act sets up a monitoring system to evaluate environmental impacts of newly emerging digital technologies, in particular, artificial intelligence."
  • Canada - "A provision of this act authorized the Canadian government to make a payment of $125 million to the Canadian Institute for Advanced Research to support the development of a pan-Canadian artificial intelligence strategy."

However, those examples also show that:

  • there is a range of types of AI legislation caught in the statistics: from those which (in the Report's words) 'directed' or 'authorized' a government body to 'support research' or 'support the development' of a strategy, to those which 'sets up a monitoring system', to those which 'authorized... a payment';
  • AI-related legislation can range from being a small part of a much bigger piece of legislation with broader objectives ('a provision of this act...') to being the target of the legislation itself.  In terms of not all AI legislation and debate being equal, the examples above are different in nature and potential impact from the EU's proposed AI Act, the focus of which is (in summary) to restrict AI systems with unacceptable risk and to ensure that AI systems of different levels of risk are subject to proportionate measures to manage that risk (we have written about the EU's proposed AI Act before, including on recent proposed amendments here).

The statistics demonstrate that there is a growing number of laws globally that refer to AI.  But in many of those laws, such as the examples above, AI appears only in passing or as part of a bigger objective.  That does suggest that legislative bodies around the world are recognising the potential of AI to help achieve policy goals and, in other examples, the need for AI usage to sit within appropriate control frameworks.  However, the increase in the number of laws mentioning AI does not necessarily mean that those laws have a significant impact on how AI is used or regulated.  To understand the significance of AI legislation, each law needs to be analysed against the companies or industries it affects.  And, of course, there are laws which impact AI but which do not refer to AI at all (for example, the Data Protection Act 2018).

If you would like to discuss the potential impact of AI legislation and policy (or legislation and policy which affects AI), please contact Tom Whittaker or Martin Cook.