The National Institute of Standards and Technology (NIST), a US Department of Commerce agency and one of the leading voices in the development of AI standards, has announced the launch of its Artificial Intelligence Risk Management Framework (the Framework).
NIST’s Framework is expected to influence future US legislation and global AI standards. In this article we provide a high-level summary of the Framework.
What is the NIST Risk Management Framework?
NIST was required by the National Artificial Intelligence Initiative Act of 2020 to produce a resource for ‘organisations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems’. The Framework seeks to cultivate trust in AI technologies and to support organisations to ‘prevent, detect and mitigate AI risks’.
The Framework:
- is a voluntary guidance document for use by organisations developing, deploying, or using AI systems.
- reflects stakeholder input, having been developed over 18 months in close collaboration with private and public sector organisations and incorporating more than 400 formal comments from 240 organisations.
- is intended to be practical, to adapt to the landscape as AI technologies continue to develop, and to be operationalised by organisations in varying degrees and capacities so that society can benefit from AI while also being protected from its potential harms. “This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organisations to enhance AI trustworthiness while managing risks based on our democratic values,” said Deputy Commerce Secretary Don Graves. “It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”
- will evolve. In a live video announcing the Framework’s launch, Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio said “Congress clearly recognised the need for this voluntary guidance and assigned it to NIST as a high priority.” NIST is counting on the broad community, she added, to “help us refine these roadmap priorities.”
How is the Framework applied?
The Framework is divided into two parts. The first part discusses how organisations can understand the risks related to AI, describes the intended audience, and outlines the characteristics of trustworthy AI systems.
The second part, which forms the core of the Framework, describes four specific functions to help organisations address the risks of AI systems in practice:
- Govern: central to the Framework’s approach to risk mitigation, promoting a foundational culture of risk prevention and management across the organisation.
- Map: recommended methods for contextualising and identifying AI system risks.
- Measure: recommendations for assessing, analysing, and tracking identified AI risks.
- Manage: recommendations for allocating resources and prioritising AI system risks.
NIST recommends that the Framework be applied at the beginning of the AI lifecycle and that diverse groups of internal and external stakeholders be involved in ongoing risk management efforts.
NIST has also released a voluntary companion Framework Playbook, which suggests ways to navigate and use the Framework.
The Framework is relevant outside the US
The Framework should be seen as part of wider international efforts to address the potential benefits and risks of AI. Whilst the US and EU are taking different approaches to legislation (as can be seen in our horizon scanning piece), they have signed a number of agreements to develop trustworthy AI (for example, the September 2021 EU-US Trade and Technology Council statement). The EU’s standards organisations are likely to pay close attention to the Framework as they set AI-related technical standards.
In any event, NIST’s work is influential both in the US and internationally, and there is every reason to expect its Framework to help set standards globally. The EU AI Act includes requirements for high-risk AI systems, including around risk management systems and governance. Those requirements must take into account the ‘state of the art’, which may well include, amongst other things, international standards for managing risks associated with AI.
If you would like to discuss how current or future regulations and guidance impact what you do with AI, please contact Tom Whittaker or Brian Wong.
This article was written by Eve Hayzer.