The EU Commission has announced that over 100 organisations have signed the AI Pact, a series of voluntary commitments to start applying the principles of the AI Act ahead of its entry into application and to enhance engagement between the EU AI Office and all relevant stakeholders, including industry, civil society and academia.
This is in the context that, whilst the AI Act entered into force on 1 August 2024, various transition periods apply - such as for prohibited practices and high-risk AI systems - before it becomes fully applicable after two years (see our practical flowchart to navigate the AI Act).
The commitments (explained below) are also relevant to those who have not signed the Pact, because they are intended to reflect methods of complying with the AI Act and to establish best practices for adhering to its principles.
Voluntary commitments
By taking part in the Pact, signatories agree to make three ‘core’ commitments and, potentially, also other additional commitments.
The commitments mostly focus on transparency obligations and on requirements for AI systems that are likely to be classified as high-risk under the AI Act. The AI Pact encourages providers and deployers of AI systems to commit to the pledges that are relevant to them, and to share their best practices, irrespective of whether these organisations are currently putting into service or placing on the EU market high-risk AI systems.
Further, where an organisation is not able to meet all of the commitments, the AI Pact states that it may declare its intention to contribute to their fulfilment to the best of its ability.
Core commitments
These are:
- adopt an AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act;
- carry out to the extent feasible a mapping of AI systems provided or deployed in areas that would be considered high-risk under the AI Act;
- promote awareness and AI literacy of their staff and other persons dealing with AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons affected by the use of the AI systems.
For more on AI literacy, see our overview here.
Additional commitments
Organisations can also consider whether they want to make additional commitments.
Potential commitments for organisations that develop AI systems include:
- put in place processes to identify possible known and reasonably foreseeable risks to health, safety and fundamental rights that could follow from the use of relevant AI systems throughout their lifecycle;
- develop policies to ensure high-quality training, validation and testing datasets for relevant AI systems;
- inform deployers about how to appropriately use relevant AI systems, their capabilities, limitations and potential risks;
- implement policies and processes aimed at mitigating risks associated with the use of relevant AI systems, in line with the relevant obligations and requirements envisaged in the AI Act, to the extent feasible;
- design AI systems intended to directly interact with individuals so that those individuals are informed, as appropriate, that they are interacting with an AI system;
- design generative AI systems so that AI-generated content is marked and detectable as artificially generated or manipulated through technical solutions, such as watermarks and metadata identifiers;
- provide means for deployers to clearly and distinguishably label AI-generated content, including image, audio or video constituting deep fakes.
And potential commitments for organisations that deploy AI systems include:
- ensure that individuals are informed, as appropriate, when they are directly interacting with an AI system;
- provide clear and meaningful explanations to individuals when a decision about them is prepared, recommended or taken by relevant AI systems with an adverse impact on their health, safety or fundamental rights;
- when deploying relevant AI systems at the workplace, inform workers’ representatives and affected workers.
The text of the Pact is here. These pledges are not legally binding and do not impose any legal obligations on participants.
Continued development
The AI Pact is led by the EU AI Office, which 'plays a key role in implementing the AI Act - especially for general-purpose AI - fostering the development and use of trustworthy AI, and international cooperation'.
The AI Office is developing the AI Pact as part of a two-'pillar' approach:
- Pillar I acts as a gateway to engage the AI Pact network for organisations that have expressed an interest in the Pact, encourages the exchange of best practices, and provides practical information on the AI Act implementation process;
- Pillar II encourages AI system providers and deployers to prepare early and take action towards compliance with the requirements and obligations set out in the legislation.
If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, David Varney, Martin Cook or any other member in our Technology team.
For the latest on AI law and regulation, see our blog and sign-up to our AI newsletter.