Last updated: 31 March 2026
The DfE Generative AI in Education Guidance
In 2023 the Department for Education became one of the first government departments in the world to publish sector-specific guidance on generative AI in education. Updated in 2024, the guidance — “Generative AI in Education: Responsible Use Guidance” — applies to schools, colleges, and other education settings in England and sits alongside (not instead of) existing data protection, safeguarding, and academic integrity obligations.
The guidance is structured around four principles that every education setting — including training providers — should treat as the policy foundation for their own AI position:
- Transparency: Staff, learners, and parents should be aware when AI tools are being used in educational processes. This does not require disclosure of every minor use, but AI-assisted decisions about learners, AI-generated materials used in teaching, and AI tools used in assessment must be visible and documented.
- Safety: AI tools must not compromise safeguarding, data protection, or the wellbeing of learners. This includes not inputting personal data about learners into consumer AI tools, not using AI in ways that could expose learners to harmful content, and maintaining GDPR-compliant data handling at all times.
- Appropriate use: AI should only be used in contexts where it genuinely supports educational outcomes. The DfE guidance explicitly cautions against AI use that substitutes for the professional judgement, relationships, and expertise of educators — the tool must serve the educational purpose, not replace it.
- Human oversight: Educators retain responsibility for all AI-assisted decisions and outputs. This is not a soft principle — it is the central commitment of the DfE framework. No AI output in an educational context can be used without professional review and accountability.
These four principles are not school-specific. They apply directly and equally to further education colleges, Skills Bootcamp operators, apprenticeship training providers, and any other provider regulated by or funded through the DfE (which absorbed the ESFA's funding functions in March 2025). If your organisation delivers DfE-funded education in any form, this guidance represents the government’s position on how AI should be handled in your setting.
Ofsted’s Current Position on AI
Ofsted has been careful to position itself as a quality-of-education regulator, not a technology regulator. The current inspection framework assesses whether learners are making progress, whether teaching is effective, and whether assessment is valid and reliable. Ofsted does not specifically assess whether a provider uses AI or how — that is not their remit.
However, three areas of AI use create direct inspection risk that providers should understand:
Assessment validity
Ofsted inspectors assess whether assessment is valid — whether it genuinely measures learner knowledge and skill. If a provider’s assessment design is so vulnerable to AI generation that the submitted evidence cannot be reliably attributed to the learner, this is an assessment validity problem and an inspection risk, regardless of whether the inspector specifically mentions AI. Providers whose portfolio evidence could plausibly have been generated without learner engagement will struggle to demonstrate genuine learner progress during inspection.
AI-generated coursework without detection
Ofsted is increasingly aware that AI-generated submissions represent a growing challenge to assessment integrity across the education sector. While there is no specific Ofsted criterion on AI detection, an inspection that finds a provider has no processes for identifying AI-generated evidence — and no learner policy covering AI use — will raise concerns about governance of assessment.
Staff competence and curriculum quality
The Education Inspection Framework assesses the quality and substance of the curriculum. If AI tools are being used to design curricula or learning materials without appropriate quality assurance, this creates a risk to curriculum quality standards. Equally, if tutors are using AI to generate learner progress records without review, this undermines the reliability of progress monitoring — which inspectors do assess.
Ofqual’s Stance on AI in Regulated Qualifications
Ofqual regulates qualifications, assessments, and examinations in England. Its AI guidance position, developed through 2024 and into 2025, takes a principles-based approach: AI is not inherently prohibited in assessment contexts, but awarding organisations must ensure that their assessments remain valid and that AI use does not create unfair advantage or undermine the integrity of the qualification.
In practice, Ofqual expects awarding organisations to:
- Review their assessment designs to identify where AI could be used to generate credible but inauthentic submissions
- Implement detection or assessment redesign measures where AI risk is material
- Update regulations for learners (the rules that learners must follow) to address AI use explicitly
- Provide guidance to centres — training providers and schools — on implementing those rules
For training providers delivering regulated qualifications, this creates direct obligations. You are a centre. Your awarding organisation’s regulations are binding on you. If your awarding organisation has updated its AI regulations for learners and you have not communicated those updates to your learners, or have not updated your own policies to reflect them, you are potentially out of compliance with your centre agreement.
The Joint Council for Qualifications (JCQ), which coordinates policy across the major awarding bodies, has also published guidance on AI use in assessments. JCQ’s position is clear: AI tools are prohibited in the vast majority of formal assessments unless explicitly permitted by the awarding body. This includes GCSEs, A-Levels, and other regulated qualifications. For providers delivering these qualifications alongside apprenticeship or vocational programmes, the JCQ position applies to the regulated qualification components.
Teacher and Trainer AI Literacy
The Education and Training Foundation (ETF), which leads professional development for the further education sector, has published an AI in Teaching Framework for FE practitioners. This framework distinguishes between three levels of AI capability for educators:
- AI awareness: understanding what generative AI is, how it works at a conceptual level, and its implications for education
- AI proficiency: being able to use AI tools effectively for planning, resource development, and administrative tasks
- AI leadership: being able to lead AI policy and practice development in the organisation, advise colleagues, and contribute to sector-level discussions
The ETF framework is supported by funded CPD routes for FE practitioners, including subsidised online programmes and network learning. For training providers employing qualified teachers or tutors, engaging with ETF’s AI CPD offer is the most structured way to build staff AI capability systematically.
The DfE’s teacher training standards (the Initial Teacher Training and Early Career Framework, ITTECF) have also been updated to reflect the expectation that newly qualified teachers will have baseline digital and AI literacy as part of their professional formation. This matters for the training sector because:
- The next generation of tutors entering the workforce will have been trained with AI tools as a professional expectation
- Organisations that do not provide structured AI CPD for existing staff will find themselves with a skills gap between newer and longer-tenured tutors
- The standard of AI competence expected of educators is rising — providers need to plan for that now, not in two years
Apprenticeship EPA Integrity and AI
The apprenticeship sector faces a specific challenge that is distinct from schools: the portfolio-based evidence model that underpins many apprenticeship standards. Learners compile a portfolio of work-based evidence — reflections, project records, KSB-mapped case studies — that forms a core part of their EPA gateway submission. Generative AI can produce credible, structured KSB-mapped evidence at scale, without the learner having done the underlying work.
End-Point Assessment Organisations (EPAOs) are responding in different ways. The most common approaches include:
- Professional discussion formats: shifting assessment weight towards live professional discussions where the learner must demonstrate knowledge verbally, in real time, in ways that AI cannot generate for them
- Observation-based evidence: requiring that evidence is drawn from observed practice — either by the employer or the EPAO — rather than self-reported reflections
- Direct questioning: assessors are trained to probe the authenticity of portfolio evidence through direct questioning during EPA, looking for the depth of understanding that AI-generated text typically lacks
- Declaration requirements: EPAOs are updating their documentation to require learners to declare the extent of any AI assistance in portfolio preparation
The 2026 apprenticeship assessment reform — which introduces a sampling model for EPA across many standards — has AI integrity implications as well. A sampling model that reviews a subset of evidence rather than the full portfolio is more vulnerable to AI generation unless the sampled items are selected in ways that make inauthentic submissions detectable.
Training providers should proactively review the EPA plans for the standards they deliver, understand how EPAOs are responding to AI risk, and build that understanding into their learner induction and ongoing programme delivery.
T-Levels and AI
T-Levels are a post-16 qualification combining classroom learning with a significant industry placement component. The AI question for T-Levels has two dimensions.
First, the industry placement: learners spend 45 days (minimum) with an employer. Evidence from the placement — reflective logs, project outputs, employer assessments — forms part of the T-Level assessment. The same AI authenticity challenge applies here as in apprenticeship portfolios. DfE and awarding bodies have issued guidance to T-Level providers on requiring placement evidence to be contemporaneous, employer-verified, and supplemented by professional discussions.
Second, the core and occupational specialist components: T-Level assessments include externally set and marked examinations as well as employer-set projects. The employer-set project is the most AI-vulnerable element, and awarding bodies are actively reviewing project designs to reduce the scope for AI generation. For providers delivering T-Levels, staying current with awarding body updates on AI in employer-set projects is an ongoing obligation.
The Skills Pipeline: AI Literacy for School Leavers
AI literacy is increasingly embedded in the school curriculum as a core digital competency. The Digital T-Level and computing curriculum at GCSE and A-Level both incorporate elements of AI understanding — not just as a topic of study, but as a practical skill. This has a direct implication for the workforce training sector: employers in 2027 and beyond will be recruiting school leavers who have been formally taught AI literacy as part of their education.
This creates both opportunity and responsibility for training providers:
- Opportunity: apprenticeship programmes that build on school-level AI literacy and develop it into workplace-applicable AI proficiency will be compelling to both employers and learners
- Responsibility: providers must ensure that their AI literacy content is pitched correctly — not teaching skills learners already have, and genuinely extending their capability into the occupational context
- Curriculum design: apprenticeship standards that include digital or AI-related KSBs should be mapped against what learners are likely to arrive with, and learning programmes designed to build on that foundation rather than assume a blank slate
AI in Education Funding: EdTech, Jisc, and UKRI
Several funded programmes support AI adoption in education and training settings:
- EdTech Innovation Fund: The DfE has funded a series of EdTech pilots through Innovate UK and its own EdTech programmes, several of which focus on AI-assisted teaching, feedback, and assessment. Providers interested in piloting AI tools for educational use should monitor the DfE’s EdTech programme pages.
- Jisc: Jisc, the sector body for digital technology in education and research, has developed an AI in Further and Higher Education Framework that provides practical guidance for FE colleges and training providers on AI governance, staff development, and learner policy. Jisc also offers funded sector support — including workshops, resources, and advisory services — for providers navigating AI adoption.
- UKRI grants for AI in HE: UK Research and Innovation (UKRI) has funded a range of AI in higher education initiatives through its Research England council and via the Alan Turing Institute. For providers with HE partnerships or degree apprenticeship programmes, UKRI funding may be accessible for AI-in-education research and development.
- ETF CPD funding: The Education and Training Foundation administers funded CPD for FE practitioners, including specific AI in Teaching programme funding. Check ETF’s current funded CPD offers for eligibility.
What Training Providers Should Do
Translating the DfE guidance, Ofsted’s position, and Ofqual’s requirements into operational practice requires action across four areas:
Update learner AI acceptable use policies
Every training provider should have a written AI acceptable use policy for learners that covers: which AI tools learners may use and for what purposes; what is prohibited (for example, submitting AI-generated work as the learner's own); how learners should declare AI assistance; and what the consequences of misuse are. This policy should be communicated at induction, built into the learner handbook, and reviewed annually as the policy landscape evolves.
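For providers that manage policy content digitally, the coverage listed above can be tracked as a simple structured record. This is an illustrative sketch only: the section names are assumptions chosen to mirror the list above, not a DfE-mandated schema.

```python
# Illustrative completeness check for a learner AI acceptable use policy.
# Section names are hypothetical, mirroring the coverage described above.

REQUIRED_SECTIONS = {
    "permitted_uses",       # which AI tools learners may use, for what purposes
    "prohibited_uses",      # e.g. submitting AI-generated work as one's own
    "declaration_process",  # how learners declare AI assistance
    "consequences",         # what happens on misuse
    "review_cycle",         # annual review commitment
}

def missing_sections(policy: dict) -> set[str]:
    """Return any required policy sections absent from a policy record."""
    return REQUIRED_SECTIONS - policy.keys()

# Example: a draft policy that has not yet documented consequences or review
draft_policy = {
    "permitted_uses": "Research and drafting support, with declaration",
    "prohibited_uses": "Submitting AI-generated work as the learner's own",
    "declaration_process": "Declaration form attached to each submission",
}
print(sorted(missing_sections(draft_policy)))  # ['consequences', 'review_cycle']
```

A set difference keeps the check order-independent and makes gaps explicit for annual review.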
Build tutor and assessor AI literacy
Tutors and assessors need AI literacy at two levels: the ability to use AI tools effectively in their own practice (curriculum design, resource development, progress monitoring), and the ability to identify AI-generated learner work and respond appropriately. Engaging with ETF’s AI CPD programme and building internal AI literacy sessions for tutors is a practical starting point.
Engage with Ofqual and EPAO guidance
Providers should actively monitor their awarding organisations and EPAOs for updated AI guidance. This is not a one-time task — it is an ongoing responsibility. Designate a member of quality or curriculum staff to track AI policy updates from relevant awarding bodies and EPAOs, and ensure that updates are cascaded to tutors and communicated to learners.
Integrate AI skills into programme design where relevant
For apprenticeship standards and qualifications that include digital or AI-related KSBs, providers should ensure that AI skills are genuinely embedded in learning — not added as an afterthought. Where the occupational standard does not explicitly reference AI but the occupation clearly uses AI tools (data analysis, marketing, finance, customer service), consider how AI workplace competence can be developed as part of the programme even if it is not formally assessed.
Training Provider AI in Education Readiness Checklist
Use this checklist to assess your current position against DfE, Ofqual, and Ofsted expectations:
- We have a written learner AI acceptable use policy that is communicated at induction
- Our policy clearly defines permitted and prohibited uses of AI tools for learners
- Learners are required to declare AI assistance in assessed work
- Our tutors have received training on identifying AI-generated learner work
- Our assessment design incorporates elements that cannot be AI-generated (live discussion, observed practice, direct questioning)
- We have reviewed updated AI guidance from our awarding organisations and EPAOs
- Our quality assurance process includes review of assessment authenticity, including AI misuse
- We have a named lead responsible for tracking AI policy updates from DfE, Ofqual, and sector bodies
- Our tutors have access to funded AI CPD (ETF or equivalent)
- Our learner induction covers DfE’s four principles (transparency, safety, appropriate use, human oversight) in age-appropriate terms
- We have reviewed our data protection procedures to ensure AI tool use complies with UK GDPR
- No personal learner data is entered into consumer AI tools without a DPIA and appropriate data processing agreements
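The checklist above can be turned into a rough self-scoring exercise. The following sketch is one hypothetical way to tally yes/no answers; the banding thresholds are illustrative assumptions, not DfE, Ofqual, or Ofsted criteria.

```python
# Illustrative self-assessment tally for the readiness checklist above.
# Item labels abbreviate the checklist; the scoring bands are assumptions
# for illustration only, not regulator-defined thresholds.

CHECKLIST = [
    "Written learner AI acceptable use policy, communicated at induction",
    "Policy defines permitted and prohibited AI use",
    "Learners declare AI assistance in assessed work",
    "Tutors trained to identify AI-generated work",
    "Assessment design includes elements AI cannot generate",
    "Awarding organisation / EPAO AI guidance reviewed",
    "QA process covers assessment authenticity and AI misuse",
    "Named lead tracks AI policy updates",
    "Tutors have access to funded AI CPD",
    "Induction covers DfE's four principles",
    "Data protection procedures reviewed for AI tool use",
    "No learner personal data in consumer AI tools without DPIA",
]

def readiness_summary(answers: list[bool]) -> str:
    """Return a rough readiness band from yes/no answers (one per item)."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist item required")
    met = sum(answers)
    gaps = [item for item, ok in zip(CHECKLIST, answers) if not ok]
    band = "strong" if met >= 10 else "developing" if met >= 6 else "at risk"
    return f"{met}/{len(CHECKLIST)} items met ({band}); gaps: {len(gaps)}"

# Example: a provider meeting 9 of the 12 items
answers = [True] * 9 + [False] * 3
print(readiness_summary(answers))  # 9/12 items met (developing); gaps: 3
```

In practice the value is less in the score than in the `gaps` list, which names the specific items to action before the next policy review or inspection cycle.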