The Council of Europe’s European Audiovisual Observatory (EAO) has published a comprehensive report on the impact of artificial intelligence (AI) in Europe's audiovisual industries. 

The report examines the transformative effects of AI technologies such as Claude, Midjourney and DALL·E on media production and distribution, and highlights a raft of legal, ethical and regulatory challenges, including:

  1. protection of intellectual property;
  2. compliance with data protection law;
  3. protection of personality rights;
  4. misinformation and disinformation.

Opportunities and challenges

The report sets the stage for the use of AI in European audiovisual industries, outlining potential transformative advantages such as enhancing creativity, personalising content, and streamlining production processes. However, it also highlights significant legal challenges and potential drawbacks associated with AI use in the sector:

  1. Copyright and ownership: works generated solely by AI are unlikely to qualify for copyright protection in key territories such as the US and EU. The report contrasts this with the UK, noting that UK copyright legislation already protects computer-generated works. However, the position under UK law is in fact more nuanced and is likely to align with the position under EU law, requiring human creativity for copyright to subsist in a work. For further analysis, see our previous blog post: IP and generative AI: what you need to know, by Harry Jewson.
  2. Training and infringement: question marks hang over the training of generative AI systems, with rightsholders bringing copyright infringement claims against major AI services in various jurisdictions. 
  3. Personality rights and transparency: in many jurisdictions, the ability of generative AI to replicate voices and create digital doubles poses a new challenge for protecting the images and voices of actors, musicians and other performers. For example, the UK has no standalone concept of image rights; instead, it indirectly protects against the misuse of celebrity likenesses via the tort of 'passing off', which is more commonly deployed against copycat brands.
  4. Labour market impact: AI poses a risk of major job displacement in the audiovisual sector and has already contributed to writer and actor strikes in Hollywood. 
  5. Disinformation: AI's capacity to generate fake or misleading content, particularly deepfakes, raises concerns about misinformation, especially of a political nature, which may require robust regulation.
  6. Cultural diversity, ethics and media pluralism: AI's role in personalising content can inadvertently reinforce biases and limit exposure to diverse perspectives, requiring regulatory frameworks to promote diverse content consumption.

Future of AI regulation

Looking to the future, the report evaluates whether current AI regulations are equipped to handle the challenges posed by AI in the audiovisual sector. It discusses the absence of binding sector-specific regulations and examines whether existing legislation adequately addresses the sector's specific risks and challenges. The report also explores broader ethical implications, such as authenticity and the societal impacts of AI-generated content, emphasising the need for ethical guidelines as AI continues to evolve.

This post was written by Harry Jewson and Abigail Cropper. If you would like to discuss any of the issues raised in this article, please contact Harry Jewson or Emily Roberts.

For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com)

Subscribe to our Concept newsletter and receive the latest intellectual property legal updates, news and event invitations direct to your inbox.