Last updated: 25 March 2026
What IQA sampling is — and why it matters now
Internal Quality Assurance (IQA) is the process by which a training provider checks that its own delivery, assessment decisions, and learner evidence meet the required standard before that evidence goes anywhere external — whether to an End-Point Assessment Organisation (EPAO) or an assessment sampler. Sampling is the mechanism through which IQA is operationalised: the IQA selects a representative proportion of learners and reviews their portfolio, evidence, and tutor records against defined criteria.
IQA sampling has always been a regulatory requirement. ESFA funding rules require all providers to have a robust quality assurance process in place as a condition of their funding agreement. Ofsted inspects IQA directly under the Further Education and Skills inspection framework, assessing whether providers have effective systems to evaluate the quality of education and training they provide.
What has changed is the stakes. Under the current End-Point Assessment model, a weaker piece of on-programme evidence could be partially compensated for by a strong EPA performance — the final assessment provided a degree of cover. Under the reformed assessment model being introduced from 2026 onward, the EPAO's role shifts substantially toward sampling and evaluating on-programme evidence directly. There is nowhere left to hide. If your on-programme evidence is thin, inconsistently tagged, or poorly authenticated, that is now what the external assessor sees.
IQA sampling vs EPA: understanding the distinction
These two quality processes are frequently conflated, particularly in providers new to the apprenticeship market. The distinction matters for compliance:
- IQA sampling is the provider's own internal process. It happens throughout the programme — at induction, at key milestones, mid-programme, and before gateway. It is carried out by an IQA officer or internal verifier employed by, or contracted to, the provider. Its outputs belong to the provider and should inform tutor development and delivery improvement.
- EPA is conducted by an EPAO entirely independent of the provider. It happens at the end of the programme, after gateway sign-off. The provider has no role in the assessment itself — only in preparing the learner and ensuring the evidence is ready.
- EPAO sampling (under reform) is the EPAO's review of on-programme evidence as part of the reformed assessment model. It is external and happens after gateway. It is not a substitute for IQA — it assesses the quality of what the provider has already done.
A common and costly mistake is to treat IQA as preparation for EPA rather than as an ongoing quality mechanism in its own right. By the time a learner reaches gateway, the IQA process should already have verified the quality of their evidence multiple times. The gateway IQA check should confirm quality, not discover problems.
Regulatory and legal basis
The requirement for IQA is embedded in the ESFA funding rules for apprenticeships, which all providers must comply with as a condition of accessing apprenticeship funding. The funding rules require providers to maintain systems and controls that ensure the quality of training and assessment. Failure to maintain adequate IQA is a fundability risk — in serious cases, providers can face funding clawback or suspension.
Ofsted inspects IQA under the Further Education and Skills inspection handbook. Inspectors look at whether leaders and managers take effective action to monitor and improve the quality of education. Specifically, they assess whether providers:
- Have a systematic approach to evaluating the quality of teaching, learning, and assessment
- Use the findings from quality processes to drive improvement
- Ensure that actions taken following quality reviews are followed up and effective
An IQA plan that exists on paper but shows no evidence of being implemented — no dated sampling records, no tutor feedback, no improvement actions — is unlikely to satisfy inspectors and is a common trigger for a "requires improvement" grade under the quality of education judgement.
What a good IQA sampling plan looks like
A compliant and effective IQA sampling plan should specify four things: who will be sampled, how many, when, and against what criteria. The plan should be documented and version-controlled, with evidence of sign-off by a senior leader. It should cover all apprenticeship standards the provider delivers, not just the largest or most established ones.
Recommended sampling rates
ESFA does not prescribe a single minimum sampling percentage. However, the following rates represent sector best practice and align with what Ofsted expects to see in evidence:
- Established tutors delivering established standards: minimum 10% of learners per standard per term. For a tutor with a cohort of 20 learners on a single standard, this means at least 2 learners sampled per term.
- New tutors (in post fewer than 12 months) or tutors delivering a new standard for the first time: minimum 25% per term, rising to 50% in the first term of delivery.
- Learners formally flagged at risk — whether for evidence quality, OTJ compliance, or functional skills progress — should be sampled at 100% at the next scheduled IQA point.
- Gateway IQA: every learner recommended for gateway should have their evidence reviewed by the IQA before the gateway declaration is submitted. This is not optional.
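The rates above reduce to simple arithmetic. The sketch below illustrates one way to compute a term's minimum sample size; the tier names, the `at_risk` parameter, and the decision to sample at-risk learners on top of the routine sample are assumptions introduced for the example, not prescribed rules.

```python
import math

# Best-practice rates from the guidance above (illustrative, not ESFA-mandated)
RATES = {
    "established": 0.10,   # established tutor, established standard
    "new_tutor": 0.25,     # tutor in post < 12 months, or new standard
    "first_term": 0.50,    # first term of delivering a new standard
}

def sample_size(cohort_size: int, tier: str, at_risk: int = 0) -> int:
    """Minimum learners to sample this term: the rate applied to the
    cohort (rounded up), plus 100% of formally at-risk learners,
    capped at the cohort size."""
    base = math.ceil(cohort_size * RATES[tier])
    # At-risk learners are sampled at 100%, here added on top of the
    # routine sample (a design choice, not a regulatory requirement)
    return min(cohort_size, base + at_risk)

# A cohort of 20 with an established tutor needs at least 2 sampled per term
print(sample_size(20, "established"))             # 2
print(sample_size(20, "new_tutor"))               # 5
print(sample_size(20, "established", at_risk=3))  # 5
```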
Sampling criteria: random, targeted, and risk-based
A robust sampling plan combines three types of selection:
- Random sampling ensures all learners have an equal chance of being selected regardless of their performance or the tutor's confidence in them. This is important for detecting systemic issues that targeted sampling would miss.
- Targeted sampling focuses on areas of known or suspected risk: new tutors, new standards, learners who have had a break in learning, or cohorts where prior IQA has found evidence quality issues.
- Risk-based sampling responds to live flags — a learner whose OTJ log has not been updated in six weeks, a tutor whose previous samples showed poor KSB tagging, or a standard approaching gateway with learners who have not had a mid-programme sample.
Timing across the programme lifecycle
IQA sampling should be planned across the full programme timeline, not concentrated at the end:
- Induction (weeks 6–12): check that initial assessments have been completed, starting points are recorded, learning plans are in place, and OTJ logging has begun correctly.
- Mid-programme: check KSB tagging completeness, evidence quality, OTJ accuracy, progress review records, and SMART target quality.
- Pre-gateway: comprehensive review of all evidence against the standard's evidence requirements. Confirm functional skills status, OTJ total, and employer sign-off readiness.
- Post-intervention: if a tutor or learner has been subject to a corrective action following earlier sampling, re-sample after the agreed timescale to confirm the action has been effective.
What to look for in sampled evidence
IQA sampling is only as useful as the criteria applied. Reviewers should assess sampled evidence against a consistent checklist. The following are the most important evidence quality indicators:
KSB tagging completeness
Every piece of evidence should be tagged to one or more Knowledge, Skill, or Behaviour from the apprenticeship standard. Untagged evidence cannot be used to demonstrate competence. The IQA should check not just that tags have been applied, but that they are appropriate — a three-line observation note tagged to twelve KSBs is a red flag, not a strength.
Evidence authenticity
Can you be confident the evidence was produced by the learner? Is the observation record signed by the tutor with a date? Is the witness testimony from a suitably qualified workplace witness? Are professional discussion records detailed enough to demonstrate the learner's own understanding, rather than a transcribed answer sheet?
SMART target quality
Progress review records should contain SMART targets set at the previous review and reviewed at the current review. The IQA should check that targets are specific (linked to named KSBs or programme milestones), measurable, achievable, relevant to the standard, and time-bound. Vague targets such as "continue to develop communication skills" are not compliant and will be noted as a finding.
OTJ logging accuracy
The OTJ log should show a plausible and consistent pattern of off-the-job activity. The IQA should check that the running total is being updated, that activity types align with the standard, and that no individual entries look implausible in terms of hours claimed. Learners claiming 40 hours of OTJ in a week where they had normal working commitments should trigger a query.
Behaviour records
Behaviours are often the weakest strand in apprenticeship portfolios — they are difficult to evidence and frequently neglected until gateway. The IQA should explicitly check that behaviour evidence exists, that it goes beyond self-reflection, and that it is evidenced through observation records, witness statements, or employer feedback rather than learner assertions alone.
Documenting IQA findings and closing the loop
IQA sampling without a documented feedback and improvement loop is not IQA — it is an audit exercise with no organisational value. Every sample must result in a written record that includes:
- Date of the sample and the IQA officer conducting it
- Learner(s) sampled (anonymised for aggregate reports, but identified in the individual record)
- Tutor(s) whose work was reviewed
- Criteria applied and findings against each criterion
- Overall judgement: satisfactory, action required, or pass with development points
- Specific actions required and by whom
- Date by which actions must be completed and when re-sampling will take place
- Confirmation that feedback has been shared with the tutor, with the tutor's acknowledgement
The most important element is the feedback loop. IQA findings must be communicated to tutors in a timely way — not filed and forgotten. Where corrective actions are required, the IQA plan should schedule a follow-up sample within a defined period (typically four to six weeks for serious findings).
Common IQA failures Ofsted finds
Based on published inspection reports and sector intelligence, the following are the most frequent IQA weaknesses identified by Ofsted inspectors in apprenticeship delivery:
No written sampling plan
The provider conducts sampling activity but has no documented plan specifying rates, timing, or criteria. Inspectors cannot verify that sampling is systematic rather than ad hoc, and providers cannot demonstrate improvement over time.
Sampling only gateway-ready learners
Some providers concentrate IQA effort on learners who are approaching gateway, treating it as a pre-gateway quality check rather than an ongoing quality mechanism. This means that evidence quality issues are only identified when there is insufficient time to address them, and mid-programme delivery quality is never evaluated.
No feedback loop
IQA findings are recorded but not communicated to tutors in a structured way. Tutors do not know what was found, what they need to improve, or by when. The same weaknesses recur across multiple sampling rounds because nothing changes as a result of the IQA activity.
Sampling on paper only
A sampling plan exists and sampling records exist, but inspection reveals that the records are templated without genuine content — all learners receive the same comments, all findings are marked as satisfactory, and there is no evidence of differentiated review. This is treated by Ofsted as evidence that sampling is a box-ticking exercise rather than a genuine quality mechanism.
IQA officer lacks subject currency
The IQA officer reviewing evidence in a specialist technical standard (e.g., a Level 4 Engineering standard) lacks current occupational knowledge in that area. They cannot reliably assess whether the evidence demonstrates the required competence, and their judgements carry no credibility. Providers must ensure IQA officers have appropriate subject expertise for the standards they are reviewing.
Assessment reform: why IQA quality is now a primary assessment factor
The apprenticeship assessment reform programme being implemented from 2026 changes the architecture of how apprentices are assessed at the end of their programme. Under the reformed model for participating standards, EPAOs no longer solely conduct separate end-point assessments — they also sample and evaluate on-programme evidence that the provider has collected and quality-assured throughout the programme.
This is a fundamental shift. Under the current EPA model, an EPAO might assess an apprentice through a project, interview, and knowledge test — largely separate from the on-programme portfolio. A learner could have a thin or poorly evidenced portfolio and still achieve a strong EPA grade on the strength of their assessment performance. That buffer is removed under the reformed model.
What this means for IQA:
- On-programme evidence must be of a quality that can withstand external scrutiny — not just internal sign-off
- KSB tagging must be accurate and meaningful, because the EPAO's sampler will review the tagging as part of their assessment judgement
- Evidence authenticity becomes an external concern, not just an internal one — the EPAO will be checking whether evidence is credible
- Providers whose IQA has been tolerating weak evidence quality will find that deficiency exposed at assessment rather than absorbed by a separate EPA test
Providers should treat the introduction of reformed standards as an opportunity to review their IQA processes from first principles — not simply add a new checklist item to their existing gateway process.
How technology supports effective IQA sampling
Manual IQA processes — spreadsheets, paper sampling forms, email-based feedback — are time-consuming and create significant risk of gaps. Digital platforms that support apprenticeship delivery can substantially reduce the administrative burden of IQA while improving its consistency and auditability.
Key features that support IQA include:
- Automated evidence flagging: the platform flags evidence that has been submitted without KSB tags, or where the tag-to-evidence ratio looks implausible, so the IQA officer's attention is directed to genuine problems rather than routine review of compliant work.
- Dashboard views of IQA status: a live view of which learners have been sampled this term, which are overdue for a sample, and which tutors have outstanding IQA feedback to act on.
- Structured sampling forms: digital sampling records with consistent criteria fields ensure that every sample is reviewed against the same standard and that findings are recorded in a comparable format.
- Closed-loop feedback workflows: once an IQA record is completed, the system can automatically notify the relevant tutor and require them to acknowledge the feedback and record their response — creating an auditable feedback trail without administrative chasing.
- Aggregate reporting for SAR and QIP: because all findings are held in a structured format, it is straightforward to generate reports showing trends across tutors, standards, and cohorts — feeding directly into the quality improvement planning cycle.
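The first of these features, automated evidence flagging, can be sketched as a simple rule set. The field names and thresholds below are illustrative assumptions, not the defaults of any particular platform.

```python
def flag_evidence(items, max_ksbs_per_short_item=5, short_word_count=100):
    """Flag evidence whose KSB tagging looks implausible.

    items: list of dicts with 'id', 'word_count', 'ksb_tags' (list of codes).
    Thresholds are illustrative assumptions.
    """
    flags = []
    for item in items:
        if not item["ksb_tags"]:
            # Untagged evidence cannot demonstrate competence
            flags.append((item["id"], "no KSB tags"))
        elif (item["word_count"] < short_word_count
              and len(item["ksb_tags"]) > max_ksbs_per_short_item):
            # e.g. a three-line observation note tagged to twelve KSBs
            flags.append((item["id"], "tag-to-evidence ratio implausible"))
    return flags
```

Rules like these direct the IQA officer's attention to genuine problems rather than routine review of compliant work.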
Frequently asked questions
What is the minimum IQA sampling rate for apprenticeships?
There is no single ESFA-mandated percentage, but best practice — and what Ofsted expects to see — is a minimum of 10% of learners per standard per term for established tutors, rising to 25% or more for new tutors or standards being delivered for the first time, and 100% for learners formally flagged at risk.
How does IQA sampling differ from EPA?
IQA sampling is the provider's own internal quality check on on-programme evidence before it reaches any external assessor or EPAO. EPA is the independent assessment conducted by an EPAO at the end of the programme. IQA sampling happens throughout delivery; EPA happens at the end.
What does Ofsted look for in an IQA sampling plan?
Ofsted inspectors expect to see a written IQA sampling plan covering all standards, documented sampling activity with dates and outcomes, evidence that feedback has been given to tutors, and records of any re-sampling after corrective action. Sampling only gateway-ready learners, or having a plan on paper with no evidence it is followed, are both likely to generate a recommendation.
How does apprenticeship assessment reform change the importance of IQA?
Under the reformed assessment model, the EPAO's role shifts significantly toward sampling on-programme evidence. This means the quality of on-programme evidence becomes the primary vehicle for assessment. Weaknesses in evidence that EPA used to mask — poor KSB tagging, thin portfolio entries, inconsistent OTJ recording — become directly visible to external assessors. Robust IQA becomes essential rather than optional.