On 9 November, OFCOM released a consultation on its new regulatory role under the Online Safety Act (OSA), focusing on how user-to-user services and search services should approach their new duties to protect people from illegal harms online. The consultation was accompanied by a summary of OFCOM’s proposals, alongside first drafts of OFCOM’s risk assessment guidance, Illegal Content Codes of Practice and associated guidance. Together, these underpin the illegal harms aspect of the regulatory regime.

This follows the OSA receiving Royal Assent on 26 October, which we reported on here. OFCOM has been appointed as the new online safety regulator under the OSA. OFCOM’s approach to publishing its guidance falls into three phases, of which the illegal harms duties are Phase One. We comment further on this phased approach here.

 

Illegal Content Codes of Practice

The draft Illegal Content Codes of Practice are split into Codes for search services (i.e., services that offer search functionality) and user-to-user services (i.e., services through which user-generated content is encountered by other users). The Codes set out a range of measures in areas including content moderation, complaints, user access, design features to support users, and the governance and management of online safety risks. The measures apply differently depending on whether a service is larger or higher-risk.

User-to-user services are required to:

  • take proportionate steps to prevent users encountering illegal content;
  • mitigate and manage the risk of offences taking place through their service;
  • mitigate and manage the risks identified in their illegal content risk assessment;
  • remove illegal content when they become aware of it, and minimise the time it is present on their service; and
  • explain how they will do this in their terms of service.

Search services are required to:

  • take proportionate steps to minimise the risk of their users encountering illegal content via search results;
  • mitigate and manage the risks identified in their illegal content risk assessment; and
  • explain how they will do this in a publicly available statement.

We summarise other key measures from the draft Codes below.

 

Combating Child Sexual Abuse and Grooming

OFCOM has emphasised that its priority as online safety regulator will be the protection of children. 

Under the draft Illegal Content Codes of Practice, OFCOM proposes that larger and higher-risk services should ensure the following by default:

  • children are not presented with lists of suggested friends;
  • children do not appear in other users’ lists of suggested friends;
  • children are not visible in other users’ connection lists;
  • children’s connection lists are not visible to other users;
  • accounts outside a child’s connection list cannot send them direct messages; and
  • children’s location information is not visible to any other users.

OFCOM has also proposed that larger and higher-risk services should:

  • use a technology called ‘hash matching’ (a way of identifying illegal images of child sexual abuse by matching them against a database of known illegal images) to help detect and remove child sexual abuse material (CSAM) circulating online – an illustrative sketch of hash matching follows this list; and
  • use automated tools to detect URLs that have been identified as hosting CSAM.
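For readers less familiar with the technique, the short Python sketch below shows the basic idea behind hash matching: an uploaded image is hashed and the result is compared against a database of hashes of known illegal images. The hash value, function names and the use of an exact cryptographic hash are our own illustrative assumptions and are not part of OFCOM’s proposals; deployed systems typically rely on perceptual hashing so that resized or re-encoded copies still match, and on access-controlled hash databases maintained by authorised bodies.

```python
import hashlib

# Hypothetical set of hashes of known illegal images (placeholder value only).
# In practice this would be a vetted, access-controlled database of perceptual
# hashes supplied by an authorised body, not an in-memory set of SHA-256 digests.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash database.

    A cryptographic hash (SHA-256) is used here purely for illustration;
    production systems typically use perceptual hashes so that altered
    copies of an image still match.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

def handle_upload(image_bytes: bytes) -> str:
    """Check an upload before it becomes visible to other users."""
    if matches_known_hash(image_bytes):
        return "blocked"  # remove the content and escalate in line with the service's policies
    return "accepted"
```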

 

Fighting Fraud and Terrorism

The Codes propose steps for higher-risk services to combat fraud and terrorism as follows:

  • Automatic detection. Services should deploy keyword detection to find and remove posts linked to the sale of stolen credentials, such as credit card details, which should help prevent attempts to commit fraud (a simple illustration of keyword detection follows this list). Certain services should also have dedicated fraud reporting channels for trusted authorities, allowing fraud to be addressed more quickly.
  • Verifying accounts. Services that offer to verify accounts should explain how they do this. This is aimed at reducing people’s exposure to fake accounts, to address the risk of fraud and foreign interference in UK processes such as elections.
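As a purely illustrative example of what keyword detection of this kind might look like, the Python sketch below flags posts containing terms associated with the sale of stolen credentials so that they can be queued for human review. The keyword list, pattern and function names are assumptions for the example and do not reflect any list prescribed by OFCOM.

```python
import re

# Illustrative keyword patterns only; a real deployment would use a broader,
# regularly updated lexicon combined with other signals to limit false positives.
FRAUD_KEYWORDS = [
    r"\bcvv\b",
    r"\bfullz\b",                 # slang for complete sets of stolen personal data
    r"\bcarding\b",
    r"stolen\s+card\s+details",
]
FRAUD_PATTERN = re.compile("|".join(FRAUD_KEYWORDS), re.IGNORECASE)

def flag_for_review(post_text: str) -> bool:
    """Return True if the post matches a fraud-related keyword and should be
    queued for human moderation rather than removed automatically."""
    return bool(FRAUD_PATTERN.search(post_text))

# Example usage
posts = [
    "Selling fresh CVV and fullz, message me",
    "Great recipe for banana bread",
]
print([p for p in posts if flag_for_review(p)])  # only the first post is flagged
```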

All services are required to block accounts run by proscribed terrorist organisations.

 

Core Measures

OFCOM is also proposing a core list of measures that services can adopt to mitigate the risk of all types of illegal harm, including:

  • Name an accountable person. All services will need to name a person accountable to their most senior governance body for compliance with their illegal content, reporting and complaints duties.
  • Teams to tackle content. Making sure that services’ content and search moderation teams are well resourced and trained, that they set performance targets and monitor progress against them, and that they prepare and apply policies for how they prioritise their review of content.
  • Easy reporting and blocking. Making sure users can easily report potentially harmful content, make complaints, block other users and disable comments. 
  • Safety tests for recommender algorithms. Services that test changes to their recommender features, which automatically recommend content to users, should also assess whether those changes risk disseminating illegal content.

OFCOM’s Chief Executive, Dame Melanie Dawes, has released the following statement: “Regulation is here, and we’re wasting no time in setting out how we expect tech firms to protect people from illegal harm online, while upholding freedom of expression.”

The consultation will hear from industry experts and closes on 23 February 2024. OFCOM intends to publish a statement by the end of 2024 setting out the final versions of the guidance and Codes of Practice, subject to Parliamentary approval. Services should stay on top of developments to remain compliant with the applicable regulations.

This article was drafted by Victoria McCarron