The Information Commissioner’s Office (ICO) recently announced that it has issued Snap, Inc and Snap Group Limited (Snap) with a preliminary enforcement notice over Snap’s potential failure to properly assess the privacy risks posed to children by its generative AI chatbot ‘My AI’. Snap could ultimately face a fine running to millions of pounds.

‘My AI’

‘My AI’ is a chatbot embedded in the Snapchat application and designed to replicate human conversation. Users can customise its name, create a custom avatar for it, and add it as a member of a group chat with multiple users. It interacts with Snapchat users in much the same way as any other user, generating responses to information input via chat or via images overlaid with text (‘Snaps’).

The chatbot draws on AI technology from OpenAI, the company behind ChatGPT. It was the first example of generative AI embedded in a major messaging platform in the UK.

‘My AI’ was made available to UK Snapchat+ subscribers in February 2023 and rolled out to all UK Snapchat users by April 2023. As of May 2023, Snapchat had 21 million monthly active users in the UK.

ICO Action

The notice follows a preliminary investigation by the ICO, which provisionally found that the risk assessment Snap conducted before launching ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology.

Users aged between 13 and 17 were flagged as particularly vulnerable, as ‘My AI’ processes their personal data through the information they input to the chatbot. The ICO is therefore considering whether ‘My AI’ complies with data protection rules, including the Children’s Code (the Age Appropriate Design Code), which requires digital services ‘likely’ to be accessed by children (defined as users under the age of 18) to meet standards designed to safeguard those users against tracking and profiling.

Outlook

Whilst the ICO’s findings are provisional, and no final decision has yet been made on whether Snap has breached data protection law, the notice forms part of a wider trend of enforcement action against social media companies.

Indeed, this action is the latest in a series of penalties against social media companies for privacy breaches. We previously reported on the fines issued against TikTok by the ICO (here) and the DPC (here) respectively. Meta also recently faced a fine from the DPC for its misuse of personal data. As regards AI chatbots, Google’s ‘Bard’ faced a delayed EU release following concerns raised by the DPC over its privacy protections.

In the preliminary enforcement notice against Snap, the ICO pointed to its earlier reminder to organisations developing or using generative AI to consider their data protection obligations. Alongside this, the ICO has also published advice on the issues that developers and users of generative AI should consider.

The outcome of this case is likely to be closely watched, given the heightened scrutiny of AI technologies and data protection in the UK.

If you have any questions or would otherwise like to discuss any issues raised in this article, please contact David Varney or a member of the Data Protection Team.

This article was written by Victoria McCarron.