The National Cyber Security Centre (NCSC), the UK’s cyber security agency and a part of GCHQ, has recently released a new set of machine learning security principles to support the public sector and large organisations. The principles aim to help those developing, deploying or operating systems with a machine learning (ML) component, and to address potential vulnerabilities in those systems.

In a related blogpost, “Kate S”, the NCSC’s Data Science Research Lead, commented that the principles are intended to “bring awareness of adversarial ML attacks and defences to anyone involved in the development, deployment or decommissioning of a system containing ML”.

The principles

The principles are not designed to be a comprehensive framework, but instead to “provide context and structure” to help those developing ML systems make “educated decisions about system design and development processes, helping to assess the specific threats to a system.”

The principles reflect multiple stages of an ML lifecycle, including: the prerequisites for inception, development and wider considerations; requirements and development; deployment; operation, evaluation and re-evaluation; and retirement. The principles and themes can be summarised as follows:

  • Enable your developers: to understand the threats and mitigations so that they can anticipate vulnerabilities.
  • Design for security: by identifying vulnerabilities in the intended workflows or algorithms and encouraging a security-conscious culture.
  • Minimise adversaries' knowledge: by ensuring information is disclosed responsibly and in a way which recognises and mitigates system vulnerabilities.
  • Secure the supply chain and infrastructure: recognising the value of digital assets and ensuring the architecture is protected in transit and in situ.
  • Track the assets: maintaining the data and frameworks needed to monitor changes to an asset and its metadata throughout its life.

Takeaway

The NCSC principles are not to be taken as official guidance, but they do indicate a direction of travel: considering security concerns across the whole lifecycle of AI and ML systems. Given the risks presented by this technology and its very wide applicability, the principles and their goals should form part of (or at least be considered within) IT security infrastructure and overall business planning. If nothing else, they are a reminder of the care needed to secure resources which inherently evolve and, in turn, could present unknown risks.

If you would like to discuss the potential impact of AI legislation and policy (or legislation and policy which affects AI), please contact Tom Whittaker or Martin Cook.

This article was written by Nick Mills.