The US National Institute of Standards and Technology (NIST), part of the US Department of Commerce, has published a taxonomy and terminology of adversarial machine learning attacks and mitigations.

The document is aimed at individuals and groups responsible for designing, developing, deploying, evaluating, and governing AI systems. It explains how attacks and mitigations differ depending on the type and purpose of the AI system, whether predictive AI or generative AI. Importantly, it emphasises that decisions about how to procure, develop and deploy an AI system always involve trade-offs, and that any use of an AI system carries risks that can be mitigated but not eliminated.

As explained in the abstract:

This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is built on surveying the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods and lifecycle stages of attack, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides corresponding methods for mitigating and managing the consequences of attacks and points out relevant open challenges to take into account in the lifecycle of AI systems. The terminology used in the report is consistent with the literature on AML and is complemented by a glossary that defines key terms associated with the security of AI systems and is intended to assist non-expert readers. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems, by establishing a common language and understanding of the rapidly developing AML landscape.

If you are looking for a glossary of AI terms as found in AI regulation, law and guidance, see the Burges Salmon AI glossary here.

If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact Tom Whittaker, Brian Wong, David Varney, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI: Burges Salmon blog.