The Information Commissioner’s Office (ICO) has recently initiated a public consultation series focused on the intersection of generative Artificial Intelligence (AI) and data protection laws. This move aims to address the growing concerns and questions surrounding the development and deployment of generative AI technologies.

Generative AI: Data Protection Risks

Generative AI refers to models capable of a broad range of general-purpose tasks, such as creating music, images and videos. These capabilities are founded on the extensive datasets used to train the models. Accordingly, while the technology offers huge potential benefits, it also raises significant data protection and privacy risks. For example, there is widespread concern about the use of personal data to train AI tools, which increases the risks of data breaches and exploitation.

Role of ICO in AI Regulation

The UK Government’s approach to AI regulation is set out in the White Paper, which we previously wrote on here. The UK has taken a flexible approach, relying on existing regulators and regulatory frameworks to govern the use of AI within the UK, with targeted regulation under consideration. The ICO is the regulator responsible for overseeing data protection within the UK; in the context of AI, its subject-matter expertise and experience make it an influential body in assessing AI risks and advising on frameworks to mitigate them.

ICO Consultation: Key Areas of Focus

The ICO’s consultation series aims to provide clarity on several critical aspects of data protection law as they apply to generative AI. It has released a series of chapters outlining its emerging thinking in this respect, covering the following:

  • Lawful Basis for Training Models: Determining the appropriate legal grounds for using personal data to train generative AI models, particularly when data is scraped from the web.

  • Purpose Limitation: Exploring how the principle of purpose limitation should be applied throughout the lifecycle of generative AI, from development to deployment.
  • Accuracy Principle: Establishing expectations for ensuring the accuracy of data used and generated by AI models.
  • Data Subject Rights: Clarifying how data subject rights, such as access and rectification, should be upheld in the context of generative AI.

The above chapters have each been addressed in calls for evidence which have now closed. The final call for evidence, which focuses on the allocation of accountability for data protection compliance across the generative AI supply chain, remains open until 18 September 2024. The ICO will use the input received on these chapters to update its guidance on AI and other products.

Consultation Process and Takeaways

The ICO is inviting a wide range of stakeholders to participate in this consultation process. This includes developers and users of generative AI, legal advisors, consultants, civil society groups, and other public bodies with an interest in AI technology. The input gathered from these consultations will shape the ICO’s guidance on AI and data protection.

By seeking input from a diverse array of stakeholders, the ICO aims to ensure that the development and use of generative AI are aligned with data protection laws, ultimately fostering a responsible and trustworthy AI ecosystem.

If you have any queries about the ICO consultation series, or would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker or David Varney.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.

This passle was drafted by Victoria McCarron.