The US National Institute of Standards and Technology (NIST) has published a risk management framework for generative AI (here), including risk sub-categories and mitigations. These are mapped against, and to be read in conjunction with, the NIST Artificial Intelligence Risk Management Framework launched in January 2023, which is intended for voluntary use and to improve the ability of organisations to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

The framework defines risks that are novel to or exacerbated by the use of generative AI (GAI), as well as suggested actions to help organisations govern, map, measure, and manage these risks. In summary, the risks of generative AI are:

  1. Chemical, Biological, Radiological and Nuclear (CBRN) Information or Capabilities: Eased access to or synthesis of materially nefarious information or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) weapons or other dangerous materials or agents. 
  2. Confabulation: The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived. 
  3. Dangerous, Violent, or Hateful Content: Eased production of and access to violent, inciting, radicalizing, or threatening content as well as recommendations to carry out self-harm or conduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging or stereotyping content. 
  4. Data Privacy: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data. 
  5. Environmental Impacts: Impacts due to high compute resource utilization in training or operating GAI models, and related outcomes that may adversely impact ecosystems. 
  6. Harmful Bias or Homogenization: Amplification and exacerbation of historical, societal, and systemic biases; performance disparities between sub-groups or languages, possibly due to non-representative training data, that result in discrimination, amplification of biases, or incorrect presumptions about performance; undesired homogeneity that skews system or model outputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful biases. 
  7. Human-AI Configuration: Arrangements of or interactions between a human and an AI system which can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI systems. 
  8. Information Integrity: Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns. 
  9. Information Security: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber operations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may compromise a system’s availability or the confidentiality or integrity of training data, code, or model weights.
  10. Intellectual Property: Eased production or replication of alleged copyrighted, trademarked, or licensed content without authorization (possibly in situations which do not fall under fair use); eased exposure of trade secrets; or plagiarism or illegal replication. 
  11. Obscene, Degrading, and/or Abusive Content: Eased production of and access to obscene, degrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults. 
  12. Value Chain and Component Integration: Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.

Each of the above risks is framed in terms of a generative AI output, object, or source of the risk (some are risks “to” a subject or domain, while others are risks “of” or “from” an issue or theme).

The risks themselves vary as a result of multiple factors, including: the stage of the AI lifecycle (risks can arise at any stage); the scope of the risk; the source of the risk; the time scale (appearing abruptly or over extended periods); the nature of the generative AI model; and the use case.

The framework focuses on risks for which there is existing empirical evidence today. However, the framework also recognises that some generative AI risks are unknown, and therefore difficult to properly scope or evaluate, whilst others may be difficult to estimate given the range of stakeholders, uses, inputs, and outputs. As a result, future updates to the framework may identify new risks or provide further details.

How is this useful? Organisations have a range of technical standards and frameworks they may want to consider for AI risk management. Specialist advice will be needed. AI regulation, such as the EU AI Act (Art. 8), may refer to the state of the art for AI; risk management frameworks such as this, which identify known, evidence-based risks, may inform what is understood by the state of the art. Further, NIST's framework is likely to be of use given NIST's standard-setting role in the US and its broader role regarding AI. NIST describes itself as developing “measurements, technology, tools, and standards to advance reliable, safe, transparent, explainable, privacy-enhanced, and fair artificial intelligence (AI) so that its full commercial and societal benefits can be realized without harm to people or the planet.” NIST has also established the US AI Safety Institute and the companion AI Safety Institute Consortium.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, or Martin Cook.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.