Any organisation looking to procure, develop and/or deploy AI systems that generate content will want to consider the commercial and legal risks that have brought watermarking into the spotlight: in particular, the need to identify, track, authenticate and determine the provenance of AI-generated content. One approach is 'watermarking' the content, and the European Parliamentary Research Service has published a useful overview of generative AI and watermarking (linked at the end of this update).
In summary:
- Generative AI refers to technology designed to generate various types of new content in response to a user prompt. Examples include ChatGPT and Midjourney. The technology has the potential to transform industries and society by 'boosting innovation, empowering individuals and increasing productivity'. However, it is becoming increasingly difficult to distinguish human-generated content from content produced by generative AI, potentially enabling illegal and harmful conduct.
- AI watermarking is the process of embedding into generative AI output a recognisable, unique signal that is invisible to humans but can be detected and traced by a computer or algorithm, identifying the content as AI-generated. Different techniques have been developed for text, image, video and audio. Watermarking can be used to authenticate and monitor data (e.g. to ensure royalties are paid), protect copyright, and prevent the spread of AI-generated misinformation. However, there are limitations and drawbacks, including potential false positives (i.e. human-generated content being wrongly identified as AI-generated) and questions over robustness (e.g. a watermark being manipulated, removed or altered). A minimal illustrative sketch of one detection technique appears below, after this list.
- AI-specific regulations may require some form of watermarking to be adopted for generative AI. For example, the EU AI Act, provisionally agreed in December 2023, includes obligations on providers and users of AI systems to enable the detection and tracing of AI-generated content. The European Commission's draft standardisation request mandates the European Standardisation Organisations (CEN-CENELEC) to deliver a series of European standards by January 2025, including on transparency and information provision for users of AI systems. The Biden-Harris Executive Order on AI (October 2023) likewise requires the US administration to develop effective labelling and content provenance mechanisms so that end users can identify AI-generated content.
- What is shown in the watermark will depend on the risks, the context and the purpose for which the watermark is created and checked. This flexible, context-specific approach is recognised in discussions of the 'transparency' of AI systems. For example, the UK's White Paper on AI regulation (March 2023, with the government's response anticipated in early 2024) says:
"Transparency refers to the communication of appropriate information about an AI system to relevant people (for example, information on how, when, and for which purposes an AI system is being used). Explainability refers to the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system."
Taking the above together, watermarking may assist, in whole or in part, in providing relevant people with information on 'how, when, and for which purposes an AI system is being used'. Whether it does will depend on a case-by-case review of the relevant regulations and guidance to determine what is required.
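By way of illustration only, the Python sketch below shows how one published statistical detection technique works: the 'green-list' watermark proposed for language models by Kirchenbauer et al. (2023), in which the generator is biased towards a pseudo-randomly chosen subset of tokens and a detector later tests whether that subset is over-represented. This is not the mechanism mandated by the EU AI Act or the Executive Order, neither of which prescribes a specific technique; the function names and the toy whitespace tokenisation are our own simplifications.

```python
import hashlib
import math

def green_list_member(prev_token: str, token: str, green_fraction: float = 0.5) -> bool:
    """Deterministically assign `token` to the 'green' list, seeded by the
    preceding token. A watermarking generator would bias its sampling
    towards green tokens; the detector only needs this same test."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    # Map the first hash byte to [0, 1) and compare against the green fraction.
    return digest[0] / 256.0 < green_fraction

def detect_watermark(tokens: list[str], green_fraction: float = 0.5) -> float:
    """Return a z-score for the hypothesis that the text is watermarked.
    High scores suggest generation with the green-list bias; scores near
    zero are consistent with unwatermarked (e.g. human-written) text."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    if n <= 0:
        return 0.0
    green_count = sum(
        green_list_member(prev, tok, green_fraction)
        for prev, tok in zip(tokens, tokens[1:])
    )
    expected = green_fraction * n
    std_dev = math.sqrt(n * green_fraction * (1 - green_fraction))
    return (green_count - expected) / std_dev

# Toy usage: real systems operate on a model's token vocabulary, not words.
text = "the quick brown fox jumps over the lazy dog"
print(f"z-score: {detect_watermark(text.split()):.2f}")
```

Note how the scheme's limitations map onto those discussed above: an unwatermarked human text can occasionally produce a high z-score by chance (a false positive), and paraphrasing or editing the text changes the token pairs and can erase the signal (a robustness concern).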
If you would like to discuss the legal issues raised by AI, or how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, David Varney, Lucy Pegler, Martin Cook or any other member of our Technology team.
"Generative artificial intelligence (AI) has the potential to transform industries and society by boosting innovation, empowering individuals and increasingproductivity. One of the drawbacks of the adoption of this technology, however,is that it is becoming increasingly difficult to differentiate human-generated content from synthetic contentgenerated by AI, potentially enabling illegal and harmful conduct"
https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/757583/EPRS_BRI(2023)757583_EN.pdf