On 26 March 2024, OFCOM published its Plan of Work for 2024/2025, outlining its areas of focus for the upcoming financial year. Alongside this, and in response to the Government’s request for key regulators to publish an update on their approach to AI by 30 April 2024, OFCOM also published an updated Strategic Approach to AI (here), which we summarise below. 

Regulation of Services that use AI 

Within the publication, OFCOM: 

  • reaffirmed its support for the Government’s pro-innovation AI principles (we provide a summary of the UK Government’s approach to regulating AI here) and its keenness to ensure that the benefits of AI are harnessed and the risks managed effectively; 
  • highlighted that the introduction and implementation of the Online Safety Act “is an example of where similar principles have been actively considered by Parliament and underpin our legislative framework”. OFCOM recognises that the UK Government’s AI principles and OFCOM’s key outcomes for online safety “emphasise the importance of appropriate governance and accountability in keeping users safe” and that there are parallels between the key outcomes OFCOM would like to see and the AI principles (we provide more detail on the Online Safety Act here); and 
  • noted various examples of its regulatory powers and the regulatory outcomes it has achieved, taking into account the impact of AI. For example, OFCOM recognises that AI can enhance the “sophistication” of scam calls and messages, and that it has the power under the “general conditions” for telecoms providers to instruct providers to block access to numbers or services on the basis of fraud or misuse. 

OFCOM’s Work to Date

  • OFCOM set out the key “cross-cutting” AI risks relevant to its remit. These are: 
    • Synthetic Media: AI tools can generate synthetic media, giving rise to potential harms such as child sexual abuse imagery, terrorist content, deepfakes, non-consensual pornography, and sophisticated frauds and scams. The challenge lies in distinguishing between synthetic and real content.
    • Personalisation: AI personalises content for users, which could amplify illegal and harmful content online and create echo chambers. It may also affect the discoverability of UK and public service content, potentially leading to price discrimination and a lack of transparency.
    • Security and Resilience: Advanced AI, such as Generative AI (GenAI), could be used to develop virulent malware, identify network vulnerabilities, or cause system outages. Ensuring the robustness and security of AI models is crucial to preventing such risks.
  • OFCOM, in its capacity as a regulatory body, has actively engaged with AI-related issues, particularly to address these cross-cutting risks. OFCOM provided various examples of how its work has tackled AI specifically. The key examples cited by OFCOM are:
    • For Synthetic Media: OFCOM has published draft Illegal Harms Codes of Practice, launched a project on synthetic content detection, and commissioned research to understand attitudes towards GenAI. It has also issued a Note to Broadcasters on accountability for synthetic media under the Broadcasting Code.
    • For Personalisation: OFCOM has proposed measures for online safety, including the collection of safety metrics for recommender systems, and has also explored methods for evaluating AI-driven recommender systems and discussed the role of online intermediaries in news consumption.
    • For Security and Resilience: OFCOM has monitored developments in GenAI for potential threats to network security and engaged with providers in scope of the Telecommunications (Security) Act (TSA) on how GenAI is integrated into their systems, with robustness and security in mind.

Overall, OFCOM pledged to continue its horizon-scanning work across the sectors it regulates to identify the risks and benefits of AI for UK citizens, and to continue working with the Central Government AI Risk Function to ensure comprehensive monitoring of AI. 

Capability and Co-operation on AI Issues

In its Strategic Approach to AI, OFCOM outlines its capability to address AI risks: 

  • Staff: OFCOM employs over 100 technology experts (approximately 60 of whom are AI experts) in its data and technology teams, including some with direct experience of developing AI tools. 
  • Strategic Research: OFCOM is building strategic partnerships with academic institutions specifically to share knowledge in relation to AI, including with the Research Centre on Privacy, Harm Reduction and Adversarial Influence Online.
  • Data skills: OFCOM outlines a data strategy intended to develop a data culture and a data literacy programme, which will increase knowledge and understanding of AI across the organisation.
  • Collaboration: OFCOM actively participates in international forums and collaborates with other regulators and bodies, including the Digital Regulation Cooperation Forum (DRCF), the Global Online Safety Regulators Network, the European Platform of Regulatory Authorities, the International Telecommunication Union and the Government. 

Planned AI Work for 2024/25

OFCOM outlined its AI agenda for the upcoming year. In summary:

Online Safety:

  • OFCOM will draft and consult on Codes of Practice to address illegal and harmful content. Research will focus on AI vulnerabilities, synthetic media detection, and automated content classifiers. OFCOM also plans to explore the use of GenAI for content moderation and ways to protect children from GenAI-generated pornographic content.

Broadcasting:

  • Guidance will be issued to clarify broadcasters’ responsibilities regarding AI. Discussions with broadcasters will focus on how GenAI can reduce production costs and on its wider implications. OFCOM intends to assess the impact of AI-driven recommender systems on media plurality. 

Public Service Media Review: 

  • In the context of AI’s implications for broadcasting, the review aims to understand how AI developments could impact public service media content discoverability and viewing trends.

Telecoms:

  • OFCOM will monitor AI-related fraud and scams, engage with industry on AI-related cybersecurity risks, and investigate security compromise reports from services in scope of the TSA. It will also monitor AI standards development and AI’s impact on telecoms markets.

Cross-cutting risks:

  • OFCOM plans to engage in domestic and international forums on AI issues, horizon scan for AI developments, and collaborate with the Government on AI risk management. OFCOM also aims to build AI capabilities internally and to leverage AI across its own operations.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.

This post was written by Abbie McGregor and Liz Smith.