There is a problem: there are many types of AI risk, those risks can be significant, and many groups (such as auditors, policymakers, companies, and the public) are concerned with them.  However, according to new research, there is no common framework for classifying and discussing those risks.  To address this, MIT has launched an AI Risk Repository to help develop a shared understanding of AI risks through a live database of (currently) 777 risks drawn from 43 different taxonomies (see the AI Risk Repository here).  

The AI Risk Repository has three parts:

  • The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers.
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur, along three dimensions:
    • cause - i) humans, ii) AI, or iii) other;
    • intentionality - i) intentional, ii) unintentional, or iii) other; and
    • timing - i) pre-deployment, ii) post-deployment, or iii) other.
  • The Domain Taxonomy of AI Risks classifies these risks into:
    • seven risk domains - 1) discrimination, 2) privacy & security, 3) misinformation, 4) malicious actors & misuse, 5) human-computer interaction, 6) socioeconomic & environmental, and 7) AI system safety, failures & limitations; and
    • 23 subdomains, e.g. “False or misleading information” (see the illustrative sketch below).
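To make the two taxonomies concrete, here is a minimal sketch of how a single database entry might be represented in code. This is purely illustrative: the repository itself is published as a spreadsheet, not as code, and all class and field names below are our own assumptions rather than the repository's actual schema.

```python
# A hypothetical sketch of one AI Risk Database entry.
# Class and field names are illustrative assumptions, not the
# repository's own schema (which is published as a spreadsheet).
from dataclasses import dataclass
from enum import Enum

class Cause(Enum):            # Causal Taxonomy: who or what causes the risk
    HUMAN = "human"
    AI = "AI"
    OTHER = "other"

class Intentionality(Enum):   # Causal Taxonomy: whether the risk is intended
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"
    OTHER = "other"

class Timing(Enum):           # Causal Taxonomy: when the risk occurs
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"

@dataclass
class RiskEntry:
    description: str          # quote extracted from a source framework
    source: str               # which of the 43 frameworks it came from
    page: int                 # page number of the quote
    cause: Cause
    intentionality: Intentionality
    timing: Timing
    domain: str               # one of the seven risk domains
    subdomain: str            # one of the 23 subdomains

# Example entry, classified along both taxonomies (the domain and
# subdomain names are real; the description and source are invented).
example = RiskEntry(
    description="AI-generated content spreads false or misleading information",
    source="Example framework (illustrative)",
    page=12,
    cause=Cause.AI,
    intentionality=Intentionality.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Misinformation",
    subdomain="False or misleading information",
)
```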

What has the research found so far?

Most of the risks (51%) were presented as caused by AI systems rather than humans (34%), and as emerging after the AI model has been trained and deployed (65%) rather than before (10%). A similar proportion of risks were presented as intentional (35%) and unintentional (37%).

What's next? Expect the living database to be updated with further research and new categories of risk, and to track how risks change over time (for example, newly identified risks, or changes in how frequently particular risks are discussed).

This is not the first or only attempt at an AI risk taxonomy. The US National Institute of Standards and Technology (NIST) produced a draft AI risk taxonomy in 2021 (here). However, an updated, live database and taxonomy is likely to be welcomed by many. Further, those interested in AI risk may want to cross-refer elsewhere; for example, there is the OECD AI Incident Monitor, which records a risk type and severity for each incident logged (see here).

Finally, the MIT database “is not presented as a definitive source of truth but as a common foundation for constructive engagement and critique and a starting point for a common frame of reference to understand and address risks from AI”.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, or Martin Cook. For the latest on AI law and regulation, see our blog and newsletter.