Last updated: 31 March 2026

Ofqual’s Role and Its AI Guidance

Ofqual (the Office of Qualifications and Examinations Regulation) regulates qualifications, assessments, and examinations in England. Its role is to set the conditions that awarding organisations must meet — not to run assessments directly. When Ofqual publishes guidance on AI in assessment, it is setting expectations for awarding organisations to act on, not rules that apply directly to training providers or learners. But the chain runs downward: what Ofqual requires of awarding organisations flows through into what end-point assessment organisations (EPAOs), awarding bodies, and training providers must do.

Ofqual’s position on AI in assessment (2024–25) is that AI use is not inherently prohibited, but awarding organisations must ensure that their assessments continue to be valid — that they accurately measure what they are designed to measure. If AI tools allow learners to produce work that appears to demonstrate competence without actually doing so, the assessment is no longer valid. Ofqual requires awarding organisations to assess this risk for their qualifications and take appropriate action.

The Three Assessment Integrity Challenges AI Creates

AI-generated work submitted as the learner’s own

The most common AI integrity challenge is learners using generative AI tools to produce written work — assignments, reflective accounts, case studies, reports — and submitting it as their own. This is functionally equivalent to plagiarism: the learner is representing work they did not produce as evidence of their own competence. The difference from traditional plagiarism is scale and sophistication. AI can produce thousands of words of plausible, subject-appropriate content in minutes, and the outputs are often fluent enough to pass surface-level review.

AI-assisted cheating in live assessments

For assessments conducted in controlled conditions — exams, observed practical assessments — the AI integrity risk is lower because the controlled environment limits tool access. For remote or unsupervised assessments, the risk is higher. Providers running online exams and remote observation assessments need to consider whether learners can access AI tools during the assessment and, if so, whether that access undermines validity.

AI-generated evidence in portfolio-based assessment

Portfolio-based assessment — which underpins most apprenticeship EPA and many vocational qualifications — is the area of highest AI integrity risk. A learner can use generative AI to produce reflective accounts, KSB-mapped statements, workplace project write-ups, and supporting documentation that appears authentic but was not produced by the learner. The assessment model depends on the authenticity of portfolio evidence, and unmanaged AI use undermines that foundation.

This is not a hypothetical risk. EPAOs and IQAs in apprenticeship provision have identified AI-generated portfolio evidence as a growing quality concern since 2024. The patterns are recognisable to trained assessors — overly polished language, generic examples that lack the specific workplace detail that genuine reflection produces, suspiciously comprehensive KSB coverage without gaps — but the outputs are becoming harder to distinguish as AI tools improve.

What Awarding Organisations Must Do

Ofqual’s conditions require awarding organisations to maintain assessment validity. In the context of AI, this means: reviewing their assessment designs for AI integrity risk; implementing detection or mitigation measures appropriate to their assessment type; updating learner regulations to address AI use clearly; and providing guidance to approved centres (training providers) on what is and is not permitted.

The specific measures different awarding organisations have taken vary: some have updated assessment designs to increase the proportion of observation and live discussion evidence; some are using AI detection tools as a supporting measure; some have strengthened declaration requirements; and some are redesigning written assessments to require learner-specific contextualisation that is harder to generate without genuine workplace experience.

EPA (End-Point Assessment) and AI

EPA in apprenticeships presents the full range of AI integrity challenges. Most EPA plans include portfolio-based evidence, professional discussion or interview, and observation of practice — each with different AI risk profiles.

Portfolio evidence: Highest risk. AI can generate plausible portfolio content at scale. EPAOs are responding by placing greater weight on employer witness statements, strengthening the professional discussion as a means of testing whether learners can speak to their evidence authentically, and providing more specific guidance to training providers on evidence quality expectations.

Professional discussion: Lower risk (AI is not present in a live discussion), but the discussion’s effectiveness as an integrity safeguard depends on assessors probing beyond what the written evidence says. An assessor who simply confirms that a learner can discuss their portfolio points is less effective at detecting AI-generated evidence than one who asks for specific, unrehearsed examples and contextual detail.

Observation: Lowest risk. Direct observation of practice in a real work context cannot be substituted by AI-generated evidence. The shift toward higher-weight observation in reformed EPA plans (part of the Skills England assessment reform programme) partially addresses AI integrity risk as a side benefit.

The assessment reform angle

Skills England’s EPA reform programme — moving 93 standards in Wave 1 from the current model to a sampling approach — includes assessment design changes that reduce the weight of portfolio documentation and increase the weight of direct evidence from workplace observation and employer report. This shift has AI integrity benefits alongside its primary purpose of reducing administrative burden and improving validity.

Training Provider Obligations

Training providers sit between the awarding organisation/EPAO and the learner. Their obligations around AI assessment integrity flow from both the awarding organisation’s centre requirements and their own quality assurance responsibilities.

Learner AI acceptable use policy

Every training provider should have a clearly written AI acceptable use policy for learners that addresses assessment specifically. The policy needs to cover: what AI tools are permitted during learning activities; what is prohibited in assessment and coursework; the declaration requirement (learners confirming work is their own); and the consequences of misuse.

The policy should be communicated at induction, revisited at each assessment submission, and easy to find in the learner portal or handbook. A policy that is published but not communicated does not change behaviour.
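One way to keep the policy consistent across those touchpoints is to hold it as structured data, so the same source feeds the handbook, the learner portal, and the per-submission reminder. A minimal sketch in Python; the structure and the example wording are illustrative assumptions, not a template from Ofqual or any awarding organisation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIAcceptableUsePolicy:
    """Single source of truth, surfaced at induction and at every submission."""
    permitted_in_learning: tuple[str, ...]
    prohibited_in_assessment: tuple[str, ...]
    declaration_text: str
    misuse_consequences: tuple[str, ...]

# Illustrative content only; each provider's rules will differ.
POLICY = AIAcceptableUsePolicy(
    permitted_in_learning=(
        "Using AI tools to explore a topic or check understanding",
        "Drafting practice material that is never submitted as evidence",
    ),
    prohibited_in_assessment=(
        "Submitting AI-generated text as your own work",
        "Using AI tools during controlled or remote assessments",
    ),
    declaration_text=(
        "I confirm this work is my own and was produced without "
        "prohibited AI assistance."
    ),
    misuse_consequences=(
        "Assessor review and possible resubmission",
        "Academic misconduct procedure for repeated or serious misuse",
    ),
)

def submission_reminder() -> str:
    """Shown alongside the declaration checkbox at each submission."""
    rules = "\n".join(f"- {r}" for r in POLICY.prohibited_in_assessment)
    return f"Before you submit, remember:\n{rules}\n\n{POLICY.declaration_text}"
```

Holding the declaration text alongside the rules means the wording learners see at induction is identical to the wording they confirm at each submission.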

Designing evidence activities that resist AI gaming

The most effective long-term approach to AI assessment integrity is designing evidence activities that genuinely require learner-specific workplace experience. Evidence prompts that ask for specific incidents (date, people involved, what the learner did, what they would do differently) are much harder to generate convincingly with AI than open-ended reflective questions. Witness testimonies and employer observations that reference specific events are more robust than generic statements of competency.
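One way to make that concrete is to encode the prompt as structured fields rather than a single free-text box, so a submission without incident-level detail cannot be completed. A sketch under that assumption; the field names and the length floor are illustrative, not drawn from any particular e-portfolio system:

```python
from dataclasses import dataclass

@dataclass
class IncidentEvidencePrompt:
    """A reflective-evidence prompt that demands learner-specific detail."""
    ksb_reference: str    # e.g. "K4, S2" - which KSBs this evidence maps to
    incident_date: str    # when the workplace incident happened
    people_involved: str  # roles of the colleagues or customers involved
    what_i_did: str       # the learner's own actions, in the first person
    what_changed: str     # the outcome, and what they would do differently

REQUIRED_MIN_LENGTH = 40  # illustrative floor to reject one-line generic answers

def missing_specifics(prompt: IncidentEvidencePrompt) -> list[str]:
    """Return the fields that lack the contextual detail assessors need."""
    gaps = []
    if not prompt.incident_date.strip():
        gaps.append("incident_date")
    for name in ("people_involved", "what_i_did", "what_changed"):
        if len(getattr(prompt, name).strip()) < REQUIRED_MIN_LENGTH:
            gaps.append(name)
    return gaps
```

A prompt built this way also gives the assessor natural probing points for the professional discussion: the named incident and the people involved are details a learner with genuine experience can expand on unrehearsed.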

Training providers who review their portfolio guidance to require more specific, contextualised evidence are both improving evidence quality and building AI integrity into the programme design.

Training assessors and IQAs in AI recognition

Assessors and IQAs need to develop the ability to recognise AI-generated evidence patterns and to probe for authenticity in professional discussions. This is a professional development priority. The patterns to look for include: unusually fluent, comprehensive coverage with no gaps; overly academic language in learners who typically write informally; examples that lack the specific operational detail that genuine workplace experience produces; inconsistency between the sophistication of written evidence and the learner’s oral discussion.
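These patterns are judgement calls rather than automatable tests, but recording them as a structured review form means assessor concerns feed IQA sampling consistently instead of sitting in ad-hoc notes. A sketch, with hypothetical field names and a deliberately crude triage rule:

```python
from dataclasses import dataclass, fields

@dataclass
class AuthenticityReview:
    """Assessor's pattern check, recorded per submission for IQA sampling."""
    unusually_fluent_no_gaps: bool = False    # comprehensive coverage, no gaps
    register_mismatch: bool = False           # academic tone vs usual writing style
    lacks_operational_detail: bool = False    # generic examples, no workplace specifics
    written_oral_inconsistency: bool = False  # polished writing vs weak discussion

    def concern_count(self) -> int:
        return sum(getattr(self, f.name) for f in fields(self))

def iqa_priority(review: AuthenticityReview) -> str:
    """Illustrative triage: more recorded concerns means earlier IQA sampling."""
    n = review.concern_count()
    if n >= 2:
        return "sample-now"
    if n == 1:
        return "sample-next-cycle"
    return "routine"
```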

Declaration and monitoring

Implement a declaration process where learners confirm at each submission that the work is their own. Log declarations in your learning management system so they are available for IQA sampling and any subsequent investigation. Use available AI detection tools as a supporting indicator — not as a definitive test, since detection tools are imperfect — and ensure that detection findings trigger assessor review rather than automated decisions.
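As a sketch of how declaration logging and detection-as-indicator could fit together in a provider's own systems; the record structure, the threshold, and the routing logic are illustrative assumptions rather than features of any particular LMS or detection product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SubmissionRecord:
    """One assessment submission with its declaration and review status."""
    learner_id: str
    assessment_id: str
    declaration_confirmed: bool              # learner ticked "this work is my own"
    declared_at: Optional[datetime] = None
    detector_score: Optional[float] = None   # supporting indicator only, 0.0-1.0
    flagged_for_review: bool = False
    review_notes: list[str] = field(default_factory=list)

def record_declaration(record: SubmissionRecord) -> None:
    """Timestamp the declaration so it is available for IQA sampling later."""
    if not record.declaration_confirmed:
        raise ValueError("Submission cannot proceed without a learner declaration.")
    record.declared_at = datetime.now(timezone.utc)

REVIEW_THRESHOLD = 0.7  # illustrative; any real threshold needs local calibration

def apply_detection_result(record: SubmissionRecord, score: float) -> None:
    """Treat the detector as a prompt for human review, never as a verdict."""
    record.detector_score = score
    if score >= REVIEW_THRESHOLD:
        record.flagged_for_review = True
        record.review_notes.append(
            f"Detector score {score:.2f} >= {REVIEW_THRESHOLD}: "
            "route to assessor for authenticity review (no automated decision)."
        )
```

The key design choice is that a high detector score changes who looks at the submission next, not its outcome: the decision always rests with an assessor's judgement.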

Assessment Integrity and AI Checklist

  • Learner AI acceptable use policy in place and communicated at induction
  • Policy addresses assessment and portfolio evidence specifically
  • Declaration process implemented for all assessment submissions
  • Evidence activity design reviewed to require specific, contextualised workplace incidents
  • Assessors trained to recognise AI-generated evidence patterns
  • IQA sampling plan includes AI integrity as a sampling criterion
  • Professional discussion guidance updated: assessors probe beyond written evidence
  • AI detection tools evaluated and implemented as supporting measure
  • Academic misconduct procedure updated to cover AI-generated work
  • EPAO guidance on AI integrity reviewed for all standards delivered
  • Awarding organisation’s updated learner regulations communicated to learners
  • Employer partners briefed on witness testimony expectations and AI integrity

Evidence management that supports assessment integrity

TIQPlus helps training providers manage portfolio evidence with the specificity and audit trail that Ofqual, EPAOs, and Ofsted expect — including learner declaration tracking and IQA sampling workflows.

Book a demo
