The AI Index Report 2023 has been published by Stanford University for the sixth year, with global analysis of trends in areas such as AI R&D, ethics, the economy, education, policy and public opinion.

As the EU AI Act progresses and the UK government publishes its White Paper with proposals and next steps on AI regulation, it is useful to take a global, longer-term view of how AI has been developing.  Here we identify some of the key takeaways relevant to AI regulation.

'Industry races ahead of academia'; continued significant private investment; the companies that have adopted AI 'continue to pull ahead'

"Until 2014, most significant machine learning models were released by academia. Since then, industry has taken over. In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia. Building state-of-the-art AI systems increasingly requires large amounts of data, computer power, and money—resources that industry actors inherently possess in greater amounts compared to nonprofits and academia"

"Global AI private investment was $91.9 billion in 2022, which represented a 26.7% decrease since 2021. The total number of AI-related funding events as well as the number of newly funded AI companies likewise decreased. Still, during the last decade as a whole, AI investment has significantly increased. In 2022 the amount of private investment in AI was 18 times greater than it was in 2013."

"The proportion of companies adopting AI in 2022 has more than doubled since 2017, though it has plateaued in recent years between 50% and 60%, according to the results of McKinsey’s annual research survey. Organizations that have adopted AI report realizing meaningful cost decreases and revenue increases."

These findings are of note to governments and regulators:

  • Governments are concerned about the concentration of AI technologies, and of the necessary data and infrastructure, in relatively few private organisations.  Regulators are considering how and when proposed regulations should permit them to access people, documents and data from various AI stakeholders, such as the developers of large language models, in order to understand the risks and what has gone wrong.
  • Governments are also looking at building sovereign capabilities in large language models; the UK has introduced a 'new expert taskforce to build the UK’s capabilities in foundation models, including large language models like ChatGPT'. 
  • Regulators and governments are also concerned about imbalances within markets over which companies are adopting and using AI, and the role of regulation in ensuring that opportunities are equitably distributed. For example, the UK's White Paper states that the UK Government 'will continue to engage devolved administrations, businesses, and members of the public from across the UK to ensure that every part of the country benefits from our pro-innovation approach.'

'The number of incidents concerning the misuse of AI is rapidly rising'

"According to the AIAAIC [Algorithmic, and Automation Incidents and Controversies] database, which tracks incidents related to the ethical misuse of AI, the number of AI incidents and controversies has increased 26 times since 2012. Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities".

The AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository is an independent, open, and public dataset of recent incidents and controversies driven by or relating to AI, algorithms, and automation. 

The number of newly reported AI incidents and controversies in the AIAAIC database was 26 times greater in 2021 than in 2012 (figures for 2022 are not yet available as incidents are vetted before publication).  The report also notes that historic incidents may be under-reported; this is an issue identified in an analysis of cancelled algorithmic decision-making projects in the public sector by Cardiff University.  Examples span sectors and jurisdictions, including the use of AI to monitor prison inmates' telephone calls in the US and to risk-profile gang members in London.  This range of AI uses and risks is one of the reasons for growing national and international regulatory interest in AI.

'Policymaker interest in AI is on the rise'

"An AI Index analysis of the legislative records of 127 countries shows that the number of bills containing “artificial intelligence” that were passed into law grew from just 1 in 2016 to 37 in 2022. An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016."

"Of the 127 countries analyzed, since 2016, 31 have passed at least one AI-related bill, and together they have passed a total of 123 AI-related bills (Figure 6.1.1). Figure 6.1.2 shows that from 2016 to 2022, there has been a sharp increase in the total number of AI-related bills passed into law, with only one passed in 2016, climbing to 37 bills passed in 2022"

The analysis is based on references to 'artificial intelligence' within legislation (proposed or enacted) or in legislative debates.  However, not all references to artificial intelligence are equal: some appear in legislation focused on AI, some do not; some relate to legislation likely to have significant impact, some do not.  References also vary in whether they affect all public institutions or only some, whether they operate at federal or state level, and which sectors they cover.

Examples in the report reflect this: 

  • Spain - Right to equal treatment and non-discrimination bill - 'A provision of this act establishes that artificial intelligence algorithms involved in public administrations' decision-making take into account bias-minimization criteria, transparency, and accountability, whenever technically feasible.'
  • Alabama - Artificial Intelligence, Limit the Use of Facial Recognition, to Ensure Artificial Intelligence is Not the Only Basis for Arrest - a proposal to prohibit state or local law enforcement from using facial recognition as the sole basis for making an arrest or establishing probable cause in a criminal investigation.
  • Vermont - Act Relating to the Use and Oversight of AI in State Government - creates the Division of AI within the Agency of Digital Services to review all aspects of AI developed, employed, or procured by state government, and proposes a state code of ethics.
  • Philippines - Second Congressional Commission on Education (EDCOM II) Act - a provision of the act requires a congressional commission to review education in the Philippines, part of which involves considering how education can meet the challenges of artificial intelligence.

At the very least, the increasing number of references to artificial intelligence in proposed and enacted legislation reflects the growing role AI plays within all parts of society.  This is seen in other areas, for example:

  • policymakers' discussions about AI.  The AI Index conducted an analysis of the minutes or proceedings of legislative sessions in 81 countries that contain the keyword “artificial intelligence” from 2016 to 2022.  Mentions of AI in legislative proceedings in these countries registered a small decrease from 2021 to 2022, from 1,547 to 1,340. Spain topped the list with 273 mentions, followed by Canada (211), the United Kingdom (146), and the United States (138).
  • national AI strategies.  Canada officially launched the first national AI strategy in March 2017; since then, a total of 62 national AI strategies have been released, including the UK's in 2021.

The extent to which proposed legislation and policy will impact an organisation depends on the proposed laws and policies, the organisation, and its sector.  Identifying global trends is useful for the big picture, but understanding the impact on a specific organisation requires tailored analysis.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or Brian Wong.

Nestor Maslej, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Vanessa Parli, Yoav Shoham, Russell Wald, Jack Clark, and Raymond Perrault, “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2023.