Last updated: 26 March 2026

What Is Article 4 of the EU AI Act?

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence, entering into force on 1 August 2024 with a phased implementation timeline. Article 4 — one of the provisions that became applicable on 2 February 2025 — establishes a mandatory AI literacy obligation for organisations that develop, deploy, or use AI systems. It reads:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”

The obligation is notable for what it does and does not prescribe. It does not mandate a specific training course, certification, or number of training hours. It does not define what “sufficient AI literacy” means in absolute terms. Instead, it imposes a process obligation: organisations must take active, documentable measures to assess what AI literacy their people need for the context they operate in, and then provide training that addresses those needs proportionately.

This proportionality principle is both an opportunity and a risk for compliance teams. It allows organisations to design AI literacy programmes that are genuinely fit for their context rather than following a generic template. But it also means that a generic AI awareness e-learning module completed by all staff — without any needs assessment, system-specific content, or evidence of behaviour change — is unlikely to constitute a defensible compliance position if challenged.

Does It Apply to UK Businesses?

The EU AI Act has extraterritorial reach: it applies to organisations that provide AI systems into the EU market or use AI systems in ways that affect people in the EU, regardless of where the organisation is headquartered. Post-Brexit, UK businesses are not subject to EU law in their domestic operations. But UK businesses that sell products or services into the EU, have EU-based customers or employees, or use AI systems to process the data of EU residents are within scope for Article 4.

The practical scope is broad. A UK financial services firm with EU clients using AI for credit assessment is in scope. A UK software company selling AI-enabled tools into the EU market is in scope as a provider. A UK professional services firm using AI for document analysis on behalf of EU clients is in scope as a deployer. The key question is not whether your organisation is UK-registered but whether AI systems you develop or use touch the EU market or EU individuals.

UK organisations that are uncertain about their exposure should seek legal advice specific to their situation. As a working assumption, any UK business with meaningful EU market activity that uses AI tools in its operations should treat Article 4 as applicable and build compliance accordingly. The enforcement risk for organisations that have taken reasonable steps to comply is substantially lower than for those that have done nothing.

What Compliance Looks Like in Practice

A defensible Article 4 compliance position has four components, all of which should be documented.

AI system inventory. You cannot assess AI literacy needs without knowing which AI systems your organisation uses and who interacts with them. The starting point for Article 4 compliance is an inventory of AI systems in use — including not just bespoke AI systems but the AI-enabled features within the SaaS tools, productivity applications, and platforms your teams use daily. This inventory should capture: the system, its function, the risk category under the AI Act framework, and the employee roles that interact with it.
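The inventory fields described above can be captured in a simple structured record. The sketch below is illustrative only — the system names, roles, and simplified risk tiers are assumptions, not part of the Act — but it shows how an inventory kept as structured data (rather than free text) lets you immediately answer questions such as "which roles touch high-risk systems?":

```python
from dataclasses import dataclass, field
from enum import Enum

# Risk tiers under the EU AI Act framework (simplified for illustration).
class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# One inventory entry: the system, its function, its risk category,
# and the employee roles that interact with it.
@dataclass
class AISystemRecord:
    name: str
    function: str
    risk_category: RiskCategory
    interacting_roles: list[str] = field(default_factory=list)

# Hypothetical entries — all names and roles are invented examples.
inventory = [
    AISystemRecord("document-analysis-tool", "contract review support",
                   RiskCategory.LIMITED, ["legal", "paralegal"]),
    AISystemRecord("credit-scoring-model", "credit assessment",
                   RiskCategory.HIGH, ["underwriter", "credit-analyst"]),
]

# Which roles interact with high-risk systems and therefore need
# deeper, system-specific training under the proportionality principle?
high_risk_roles = sorted({role for rec in inventory
                          if rec.risk_category is RiskCategory.HIGH
                          for role in rec.interacting_roles})
print(high_risk_roles)  # → ['credit-analyst', 'underwriter']
```

The same structure feeds the needs assessment directly: each record already links systems to the roles whose literacy needs must be assessed.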

Literacy needs assessment. With the inventory in hand, the needs assessment maps employee roles to AI systems and identifies the literacy requirements for each interaction. This does not require a formal assessment centre — a structured questionnaire covering AI knowledge, current use patterns, and self-assessed confidence is typically sufficient as a baseline, provided it is documented and the methodology is defensible. The assessment should distinguish between roles that interact with AI outputs (who need critical evaluation skills), roles that configure or manage AI systems (who need deeper technical literacy), and roles that have no AI interaction (who may need only basic awareness).
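The three-way distinction above — output consumers, system configurers, and non-users — can be expressed as a small lookup. This is a sketch under assumed names (the role titles and profile labels are illustrative, not prescribed by Article 4); the point is that a documented role-to-profile mapping makes the assessment methodology reproducible and defensible:

```python
# Literacy depth required for each interaction profile the needs
# assessment distinguishes. Labels are illustrative assumptions.
LITERACY_LEVELS = {
    "consumes_outputs": "critical evaluation of AI outputs",
    "configures_systems": "deeper technical literacy",
    "no_interaction": "basic awareness",
}

# Hypothetical role → interaction profile mapping, as a structured
# questionnaire might record it.
role_profiles = {
    "customer-service-agent": "consumes_outputs",
    "ml-platform-admin": "configures_systems",
    "facilities-manager": "no_interaction",
}

def required_literacy(role: str) -> str:
    """Look up the training depth a role needs from its interaction profile."""
    return LITERACY_LEVELS[role_profiles[role]]

for role in role_profiles:
    print(f"{role}: {required_literacy(role)}")
```

Segmenting by profile rather than assessing every individual separately keeps the baseline assessment proportionate while still producing a documented, role-specific output.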

Training programme delivery. The training programme should address the gaps identified in the needs assessment, with content specific to the AI systems the organisation actually uses rather than generic AI concepts. Article 4 does not require organisations to train staff on every AI system in existence — it requires training relevant to the context in which AI systems are used. A customer service team using an AI chatbot needs different training from a finance team using AI for fraud detection. Both need training; the content should be different.

Documentation and records. Completion records, training content documentation, needs assessment results, and evidence of programme review should all be retained. In the event of an enforcement action, audit, or contractual due diligence query from an EU customer, these records are your primary evidence of compliance. The EU AI Act does not specify a retention period for Article 4 documentation, but aligning with your GDPR data retention policies as a minimum is a practical baseline.

Generic AI awareness training is not sufficient on its own.

Article 4 requires organisations to assess and address AI literacy needs in context — taking into account the specific AI systems in use and the roles of the people using them. A single all-staff awareness module with no system-specific content and no documented needs assessment process is unlikely to constitute a defensible compliance position. Build your programme around your actual AI inventory and role-specific needs.

Risk Categories and Literacy Depth

The EU AI Act’s risk-based framework is relevant to calibrating the depth of Article 4 training requirements. AI systems classified as high-risk under the Act — which include AI systems used in employment decisions, credit scoring, access to essential services, and several other categories — carry proportionally higher literacy obligations for the people operating them. An employee using a high-risk AI system to make or inform decisions about individuals needs more than basic awareness training; they need substantive education on how the system works, what its failure modes are, what obligations the Act places on deployers, and what their individual responsibilities are when the system produces an output they intend to act on.

For most UK businesses, the majority of AI tools in use will fall into the limited risk or minimal risk categories: productivity tools, generative AI writing assistants, basic automation. The Article 4 literacy requirement for these tools is proportionately lighter. But the proportionality principle cuts both ways: if an organisation is using AI systems in a high-risk context and has provided only the same basic awareness training as it gave employees who use AI for formatting documents, that proportionality gap is likely to be the focus of any enforcement or audit activity.

Building a Compliant Programme: Practical Steps

For UK employers approaching Article 4 compliance from scratch, a structured sequence works better than trying to build the programme all at once.

Start with the AI system inventory — this is the foundation and should take two to four weeks depending on organisation size. Engage IT, procurement, and department heads to surface AI tools that may not be visible to central L&D. Shadow IT AI use (employees using AI tools that have not been formally approved) is a common finding at this stage and needs to be addressed both as a compliance matter and as an L&D design input.

Run the needs assessment once the inventory is complete — targeting the roles identified in the inventory as AI-active. Segment by role family and by AI system type rather than trying to assess the whole workforce at once. The output should be a clear mapping of which roles need what level of AI literacy for which systems.

Design the training programme against the needs assessment outputs. For most organisations, this means at least three content layers: a baseline awareness module for all staff (covering what AI is, what the organisation’s AI policy is, and how to escalate concerns); role-specific application modules for AI-active roles (covering the specific systems they use, their failure modes, and their responsibilities); and a governance layer for decision-makers who rely on AI outputs (covering critical evaluation, risk assessment, and escalation obligations). The governance layer is particularly important for Article 4 compliance given the Act’s focus on human oversight of AI systems.
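The three-layer design above is cumulative: everyone receives the baseline, AI-active roles add an application module, and decision-makers add the governance layer on top. A minimal sketch of that assignment logic (module names are illustrative assumptions, not a prescribed curriculum):

```python
# Assign training modules cumulatively across the three content layers
# described in the programme design. Module names are hypothetical.
def assigned_modules(is_ai_active: bool, is_decision_maker: bool) -> list[str]:
    modules = ["baseline-awareness"]                # all staff
    if is_ai_active:
        modules.append("role-specific-application") # AI-active roles
    if is_decision_maker:
        modules.append("governance-and-oversight")  # relies on AI outputs
    return modules

print(assigned_modules(False, False))
# → ['baseline-awareness']
print(assigned_modules(True, True))
# → ['baseline-awareness', 'role-specific-application', 'governance-and-oversight']
```

Encoding the assignment rule once, rather than deciding per employee ad hoc, also produces exactly the kind of documented, reviewable methodology that supports an audit response.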

Document everything as you go. The programme design, the needs assessment methodology, the training content, and the completion records should all be maintained in a format that can be produced quickly in response to an audit or due diligence request. A training management system with robust completion tracking and evidence storage is significantly more defensible than email trails and spreadsheets.

The UK Regulatory Context

Within the UK, the government has adopted a sector-led, pro-innovation approach to AI regulation that does not currently include a direct equivalent of Article 4. Existing UK regulators (the ICO, FCA, CQC, and others) are publishing sector-specific AI guidance and expectations, but there is no single AI literacy mandate in UK law equivalent to the EU obligation. This means UK businesses without EU market exposure have more flexibility in how they approach AI literacy — but it does not mean the issue can be ignored. The ICO’s guidance on AI and data protection, the FCA’s expectations around AI in financial services, and the general duty of care obligations that govern employer responsibilities for workforce competence all create de facto AI literacy obligations that responsible organisations should be addressing.

For organisations with dual exposure — both UK operations and EU market activity — building the Article 4 compliance programme to the EU standard and applying it to the whole UK workforce is simpler than maintaining two parallel approaches. The compliance programme designed for Article 4 will exceed the current UK domestic standard, which provides additional protection if UK regulation converges towards the EU framework over time.

Build a documented, compliant AI literacy programme

TIQPlus gives L&D teams the platform to design, deliver, and evidence AI literacy programmes that meet Article 4 obligations — with completion records, needs assessment tools, and role-specific content delivery.

Book a demo
