The AI literacy requirements of the EU AI Act – which, in short, require that staff using AI systems have a sufficient level of AI literacy (Article 4) – came into force on 2 February 2025. The European Commission's EU AI Office has since published its Living Repository of AI Literacy Practices, which aims to encourage learning and the exchange of information between those who need to comply with the EU's AI literacy requirements.

What is the Living Repository?

The repository is a record of ongoing AI literacy practices deployed by signatories of the AI Pact and will be updated over time (see our article on the AI Pact here).

The repository is split into three sections: planned; partially rolled-out; and fully implemented AI literacy practices. These sections cover a range of sectors (though almost half of the entries are within Information and Communications Technology, as might be expected) and business sizes, ranging from micro (1 to 15 employees) to large (250 employees or more). Each record, ordered alphabetically by business, states whether the business is a provider and/or deployer of an AI system and why it has provided and/or deployed that AI system.

The repository also identifies, for each entry, the target group of the relevant AI literacy practice and documents the business's approach to AI literacy, with reference to five key questions (with our emphasis):

"(1) How does the practice take into account the technical knowledge, experience, education, and training of the target group?

(2) How does the practice take into account the context in which the AI system(s) is/are used?

(3) What has been the impact of the practice so far and how does the organisation monitor such impact?

(4) Which challenges has the practice addressed and what issues is the organisation still facing?

(5) Is the organisation planning to change and/or improve the practice?"

What are organisations doing with AI literacy?

The signatories who provided case studies for the repository consistently cited the use of training or e-learning programmes for staff, often split between different learning levels or topics, such as foundations or basics of AI, advanced AI, AI regulation, or AI in the context of the business. Some businesses have created separate, tailored programmes for technical and non-technical staff, and some have made programmes voluntary whereas others are mandatory, or are planned to become mandatory.

One signatory, travel agency Booking.com, explains that its AI literacy practices are

“designed specifically for legal and public affairs professionals at the company, who tend to be highly educated in general, but less so in the particular field of computer science. While some members of the group have a high level of technical knowledge, we were aware that there was an opportunity to close gaps in knowledge for many ...”

Multiple businesses plan to implement a mentoring programme whereby more experienced employees can assist the development of their less experienced colleagues, and others regularly assess the skills and competencies within their teams.

The respondents generally state that they are ensuring training is contextual and sector-relevant, and some separate external client and in-house test cases, meaning training programmes can be adjusted or tailored accordingly. However, whilst tailoring training to be relevant to the respondent's sector and business appears to be important to most, another signatory explains that its training on risks associated with AI is still comprehensive and covers AI risk in terms of “confidentiality, security, data privacy, intellectual property, fundamental rights, ethics and bias”.

Businesses are also working with external collaborators to further AI literacy. Two respondents have obtained (or have completed steps towards obtaining) ISO certification in AI Management (ISO 42001), whilst another provides its employees with the opportunity to apply for a course on AI in the financial industry, developed with BI Norwegian Business School and others – offering up to 7.5 ECTS credits for completion (60 credits being equivalent to a year's undergraduate study in the UK).

Do these practices set the standard?

Whilst the shared AI literacy examples provide a good insight into how others are attempting to comply with the EU AI Act, businesses are warned that simply replicating the same practices does not mean that they will be compliant. Providers and deployers may have different actors in their AI value chains, different contexts for AI use and different AI systems altogether. As one respondent noted within its “four-layered approach” (of Context, Sector, Use, Purpose) to planning and implementing AI literacy training:

“For instance, a Smart City use case involves public spaces where privacy is paramount, whereas a creative project targeting children requires age-appropriate content and safeguards. Understanding the context shapes our AI-Literacy training so participants learn the ethical, legal, and cultural nuances of data collection, user interactions, and content generation … Each sector (e.g., city planning vs. creative industry) has different regulations, audience expectations, and performance metrics … We tailor AI-Literacy content based on practical tasks the AI will perform – e.g., traffic analyses vs. edutainment. By clarifying the system’s function (e.g., object detection vs. generative content creation), participants can better grasp relevant technical pipelines and potential risks (e.g., potential bias in image recognition vs. ethical concerns in AI-driven storytelling) … Linking the system’s purpose to user outcomes (e.g., faster city planning decisions or more engaging environmental education) helps learners understand how and why AI is integrated into a project.”

The full EU AI Office Living Repository of AI Literacy Practices can be accessed here.

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith, or any other member of our Technology team.

For the latest on AI law and regulation, see our blog and sign up to our AI newsletter.

This article was written by Ryan Jenkins.