On 15 May 2024, the Department for Science, Innovation & Technology (DSIT) released a Call for Views on the Cyber Security of AI.
It focuses on the cyber security risks to artificial intelligence (AI) models and technology, as opposed to the wider safety and cyber security risks stemming from AI itself.
At the centre of the Call for Views is a proposal for a voluntary AI Cyber Security Code of Practice, which is in turn intended to form the basis of a new Global Standard for AI models.
Here we summarise the key points.
Cyber risks and the need for secure by design
DSIT has emphasised that cyber security is a key underpinning of AI safety, particularly as the technology evolves. This reflects one of the five key principles outlined in the UK’s AI Regulation White Paper (which we outline in further detail here): Safety, Security and Robustness. Accordingly, the UK Government has emphasised the need to support developers and deployers of AI systems in addressing the cyber security risks to their systems.
In particular, the Call for Views highlighted an apparent weakness in many organisations’ internal infrastructure, with 47% of organisations that use AI having no specific AI cyber security practices or processes in place.
Accordingly, the UK intends to take a secure by design approach to cyber security, safeguarding businesses and communities from cyber threats from the outset.
This should be seen in the context of the UK government’s £2.6 billion National Cyber Strategy, which we previously addressed in the following article. That strategy aims to protect and promote the UK online, including by creating security measures for emerging technologies such as AI.
Proposed Voluntary Code of Practice
As noted above, the Call for Views puts forward a new AI Cyber Security Code of Practice in line with this secure by design approach.
The Code is based on the NCSC’s guidelines for secure AI system development, published in November 2023 alongside the US Cybersecurity and Infrastructure Security Agency and other international cyber partners. Those guidelines were co-sealed by agencies from 18 countries.
The AI Cyber Security Code of Practice is intended to follow the pro-innovation, principles-based approach to AI development established by the UK Government. It outlines 12 principles for ensuring the cyber security of AI systems.
These are:
- Raise staff awareness of threats and risks;
- Design your system for security as well as functionality and performance;
- Model the threats to your system;
- Ensure decisions on user interactions are informed by AI-specific risks;
- Identify, track and protect your assets;
- Secure your infrastructure;
- Secure your supply chain;
- Document your data, models and prompts;
- Conduct appropriate testing and evaluation;
- Communication and processes associated with end-users;
- Maintain regular security updates for AI models and systems; and
- Monitor your system’s behaviour.
Each of these Principles is variously applicable to Developers, System Operators and Data Controllers of AI models. The Principles expand on how stakeholders across the AI supply chain can take practical steps to protect their users.
Proposed Global Standards
In turn, the Code is intended to inform the development of a Global Standard for AI models.
AI cyber security has become an increasingly important consideration at a global level, and is specifically addressed in Article 15 of the EU AI Act (which we comment on further here).
Within the Call for Views, DSIT emphasises the need for a global approach in order to ensure consistency of technical standards and baseline cyber security requirements, as well as to reinforce the UK’s position as a global leader in developing cyber security for global technologies.
Accordingly, DSIT has emphasised that it will continue to be involved in multilateral initiatives to continue this dialogue, including through the G7, G20, OECD and UN.
Next Steps
The Call for Views closes on 9 August 2024. It seeks feedback on the two-pronged proposal: the Code of Practice and the intention to develop a Global Standard. Accordingly, it welcomes input from industry and standards bodies on a global level.
Stakeholders in the AI supply chain are encouraged to provide specific feedback on interventions and recommend other policy options.
Throughout the 12-week call for views period, DSIT will organise workshops with industry bodies and meet international counterparts to promote its work. It will also continue to participate in UK and international conferences to present its approach globally.
Following the feedback, DSIT will publish an overview of key themes and outline the future direction of travel in relation to AI cyber security. Should there be support for it, DSIT will also begin to consider the development of a Global Standard.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member in our Technology team.
This blog was prepared by Victoria McCarron.