How Grading Works in EPA
Not every apprenticeship standard uses the same grading structure. Some earlier standards — particularly those developed before 2019 — are pass/fail only: the learner either meets the competency threshold or they do not, and there is no formal grade above pass. The majority of standards introduced or revised in recent years use a tiered grading model, though the tiers vary. Some use pass/distinction only; others use pass/merit/distinction. A small number use a numerical or banded scoring system that maps to a grade at the end.
The specific grading structure for any standard is published in its assessment plan on the IfATE website. This is not optional reading — it is the document that defines what the EPAO will assess and how they will score it. Providers who haven't read the assessment plan for every standard they deliver are preparing learners for an assessment they don't fully understand.
Where grading tiers exist, the grade is usually determined by performance across multiple assessment methods combined. A learner who achieves distinction in their professional discussion but only a bare pass in their portfolio-based interview may still achieve a pass overall, depending on how the standard weights each component. Understanding the weighting structure is essential to advising learners on where to focus their preparation effort.
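To make the weighting point concrete, here is a minimal sketch contrasting two hypothetical combination rules: one that takes the lowest component grade, and one that takes a weighted average of component points. Everything here (the point values, the weights, the rules themselves) is illustrative; the real rule for any standard is the one written in its assessment plan.

```python
# HYPOTHETICAL combination rules -- real rules are defined per standard
# in its published assessment plan. Points, weights, and thresholds
# below are illustrative only.

GRADE_POINTS = {"fail": 0, "pass": 1, "merit": 2, "distinction": 3}
POINTS_GRADE = {v: k for k, v in GRADE_POINTS.items()}

def min_grade_rule(components: dict[str, str]) -> str:
    """Overall grade = the lowest component grade (a common strict rule)."""
    return POINTS_GRADE[min(GRADE_POINTS[g] for g in components.values())]

def weighted_rule(components: dict[str, str], weights: dict[str, float]) -> str:
    """Overall grade = weighted average of component points, rounded down.
    Assumes weights sum to 1 and that any failed component fails the EPA."""
    if "fail" in components.values():
        return "fail"
    avg = sum(GRADE_POINTS[g] * weights[c] for c, g in components.items())
    return POINTS_GRADE[int(avg)]  # floor: a higher grade must be fully earned

# The learner from the paragraph above: distinction in the professional
# discussion, a bare pass in the portfolio-based interview.
learner = {"discussion": "distinction", "interview": "pass"}

print(min_grade_rule(learner))                      # -> pass
print(weighted_rule(learner, {"discussion": 0.5,
                              "interview": 0.5}))   # -> merit
```

The same learner lands on a different overall grade under each rule, which is why reading the weighting structure in the assessment plan matters before advising learners where to focus.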
Who Sets the Grade Criteria
Grade criteria are set by IfATE (the Institute for Apprenticeships and Technical Education) together with the trailblazer employer groups that developed each standard, and are published in the assessment plan. The EPAO is required to assess against those published criteria — it cannot add to them, modify them, or apply its own interpretation beyond what the plan specifies.
This means the grade descriptors are publicly available before the learner sits their EPA. There is no mystery about what distinction looks like: it is written down in the assessment plan. The consistent failure mode is that providers read the gateway criteria in the assessment plan but don't read the grade descriptors — so they prepare learners to be gateway-ready rather than distinction-ready.
Every tutor who delivers an apprenticeship standard should be able to describe the distinction-level performance indicators for that standard from memory, or at minimum know exactly where to find them. If they can't, they are not able to advise learners on what to aim for or identify gaps between a learner's current performance and the distinction threshold.
Assessment plan literacy is a quality issue
An Ofsted inspection team will ask tutors how they use the assessment plan to structure delivery and prepare learners for EPA. A tutor who is unfamiliar with the grade descriptors — or who has only read the KSB list — cannot credibly demonstrate that they are preparing learners to achieve their potential. Reading the assessment plan thoroughly is a baseline professional expectation, not an exceptional standard.
What Differentiates Pass from Distinction
While the specifics vary by standard and assessment method, there are consistent patterns in what distinguishes pass-level from distinction-level performance across the most common EPA methods.
Professional discussion
At pass level, the assessor is looking for evidence that the learner can demonstrate a KSB to the minimum threshold — they can describe what they do, give an example, and show they understand the relevant principles. At distinction level, the assessor is probing for significantly greater depth: the learner can analyse their own practice critically, identify what they would do differently and why, connect their approach to wider sector standards or regulation, and extend their thinking beyond the specific scenario they've been asked about. A distinction-level candidate can be challenged and pushed, and holds their ground with reasoning rather than recitation.
The practical implication for preparation is significant. Mock professional discussions that only test whether learners can produce an example of each KSB are preparing learners for a pass. Mock sessions that include follow-up probing questions — "why did you choose that approach rather than an alternative?", "what would you do differently if the context changed?", "how does this connect to the relevant sector guidance?" — are preparing learners to perform at distinction level under real assessment conditions.
Portfolio-based interview
At pass level, the portfolio is sufficient if it contains evidence that demonstrates each KSB at the threshold defined in the standard, and the learner can navigate the portfolio and speak to its contents in an interview. At distinction level, the assessor is looking for evidence that is analytically rich: the learner doesn't just describe what happened but analyses why, evaluates the impact of their choices, and demonstrates that their practice is reflective and developing rather than static. Distinction-level portfolios typically contain evidence that was written with depth and intention, not retrospectively assembled to cover KSBs at the last minute.
Practical observation
At pass level, the learner must be able to perform the observed tasks correctly and in line with the required standard. At distinction level, the assessor is typically looking for fluency, professional judgement in real time, and the ability to adapt and respond to unexpected situations during the observation. A learner who performs a task adequately but rigidly — or who needs significant thinking time at each step — is unlikely to achieve distinction in an observation. Distinction-level performance looks like genuine expertise: the learner demonstrates not only that they can do the task but that they are comfortable, confident, and exercising professional judgement as they do it.
Written or online test
For standards that include a knowledge test, pass requires reaching the minimum score threshold. Distinction typically requires performing significantly above that threshold — often scoring in the upper quartile of the available marks. Distinction in a knowledge test is primarily a function of preparation depth: how well the learner understands the underpinning knowledge, not just whether they can recall surface-level facts. Revision strategies that build genuine understanding — application questions, scenario-based practice, spaced retrieval — produce better test outcomes than last-minute cramming against bullet-point summaries.
How Providers Can Support Higher Grade Outcomes
The most significant lever providers have over EPA grade outcomes is not what happens in the final weeks before assessment — it is how they have structured evidence collection, reflection, and preparation throughout the programme.
Evidence quality over evidence volume
Many providers focus their IQA process on ensuring that each KSB has a sufficient number of evidence items. This is gateway thinking, not grading thinking. A distinction-level portfolio does not have more evidence — it has richer evidence. Each piece should demonstrate the KSB with depth: specific context, clear analysis of the learner's decision-making, reflection on outcomes, and links to relevant standards or sector knowledge. Tutors should be coaching learners on how to write their reflective accounts, not just how many to produce. The question isn't "do you have a piece of evidence for KSB S6?" — it's "does this evidence demonstrate S6 at distinction level?"
Developing depth of reflection
Reflection is a skill that needs to be explicitly taught and regularly practised. Many learners write reflective accounts that are descriptive rather than analytical — they recount what happened without evaluating it. Distinction-level reflection requires the learner to move from description to analysis: why did they make the choices they made, what was the outcome, what would they do differently, and what has this taught them about their own practice? Providers who build structured reflective writing exercises into programme delivery — not just as evidence submission requirements — produce learners who can sustain that analytical depth under EPA conditions.
Mock EPA sessions
A mock EPA session that is conducted against the actual grade descriptors, not just the competency threshold, is a fundamentally different exercise from a standard EPA prep session. For mock professional discussions, assessors should be using the distinction-level descriptors to frame their probing questions. Learners should receive feedback structured around the grade criteria — not just "good job" or "you need more on KSB B3" but "your response on B3 met pass level — here's what distinction looks like and how you could develop your answer to reach it." This kind of feedback requires the tutor to know the grade descriptors well enough to apply them in real time.
Structured preparation reviews
The progress reviews in the final quarter of a programme should shift focus from gateway readiness (are all KSBs covered at threshold?) to grade readiness (where is this learner likely to land, and what can we do to move them from pass to distinction before EPA?). This requires a different conversation: identifying which assessment components the learner is strongest in, targeting remaining preparation time on the highest-yield areas, and setting specific targets for evidence enhancement rather than evidence production.
The Role of the Learning Plan in Grade Outcomes
The individual learning plan (ILP) is typically treated as a compliance document: proof that the programme has been planned and agreed. It is consistently underused as a tool for driving grade outcomes.
A learning plan that is linked to grade criteria — not just competency thresholds — creates a fundamentally different programme for the learner. If the SMART targets in every progress review are written with the distinction descriptor in mind, the learner spends their programme building distinction-level capability, not passing-level capability. A target that says "produce a reflective account demonstrating KSB K4" is a gateway-level target. A target that says "produce a reflective account demonstrating KSB K4 that analyses the rationale behind your approach, evaluates an alternative you considered, and connects your practice to the relevant regulatory framework" is a distinction-level target.
The difference in learning outcome between these two targets, accumulated across a full programme, is significant. Providers who systematically write grade-aligned targets produce learners who arrive at EPA already performing at distinction level — rather than learners who need to be coached up in the final weeks.
Common Reasons Learners Achieve Pass When Distinction Was Possible
In post-EPA reviews, the same patterns recur when providers examine why a learner achieved pass rather than distinction despite being capable of more.
The most common is evidence that was sufficient but shallow. The learner had evidence against every KSB — the gateway was clean — but the evidence consisted of brief descriptive accounts rather than analytical reflections. There was nothing in the portfolio for the assessor to push back against, probe into, or use as a springboard for distinction-level questioning. The learner had ticked the gateway boxes without building the depth that distinction requires.
The second most common is underperformance in the professional discussion due to preparation gaps. The learner knew their subject — their evidence demonstrated genuine competence — but they were not comfortable being challenged on it in a formal assessment context. They had not practised being probed. When the assessor asked follow-up questions beyond the surface of their examples, the learner became uncertain and retreated to description. Mock sessions that end at the first correct answer train learners to produce correct answers, not to sustain professional discussions under scrutiny.
A third pattern is learners' self-limiting beliefs about what they are capable of. Some learners approach EPA expecting to pass and are not psychologically prepared for distinction. This is as much a pastoral issue as a pedagogical one — tutors who proactively frame distinction as a realistic and expected target for capable learners, from early in the programme, produce learners who aim for it and prepare accordingly.
Grade data as a quality indicator
Prentice tracks EPA grade outcomes against cohort characteristics, tutor, and standard — so providers can identify where their preparation is producing distinctions and where learners are systematically achieving pass when the evidence suggests they could do more. Grade distribution across a cohort is a quality signal, not just a compliance output.
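As a sketch of what that analysis involves, the snippet below computes distinction rate grouped by tutor from a flat list of outcome records. The record fields and names are hypothetical, not any particular system's schema, and a real analysis would control for cohort size and learner intake before reading anything into the gaps.

```python
# Sketch: grade distribution by grouping key as a quality signal.
# Record fields and values are hypothetical and illustrative only.
from collections import defaultdict

records = [
    {"tutor": "A. Khan",  "standard": "ST0001", "grade": "distinction"},
    {"tutor": "A. Khan",  "standard": "ST0001", "grade": "pass"},
    {"tutor": "B. Singh", "standard": "ST0001", "grade": "pass"},
    {"tutor": "B. Singh", "standard": "ST0001", "grade": "pass"},
]

def distinction_rate(records, key):
    """Share of graded outcomes at distinction, grouped by `key`."""
    totals, distinctions = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[key]] += 1
        distinctions[r[key]] += r["grade"] == "distinction"
    return {k: distinctions[k] / totals[k] for k in totals}

print(distinction_rate(records, "tutor"))
# {'A. Khan': 0.5, 'B. Singh': 0.0} -- a gap worth investigating, once
# cohort size and learner starting points are taken into account.
```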
Grading and Resit Rules
If a learner fails one component of their EPA, the process for resitting or retaking that component is defined in the assessment plan — not by the provider. Most plans allow a resit of the failed component without requiring the learner to repeat components they passed. However, the rules vary significantly: some standards require all components to be retaken if any is failed; others allow component-level resits with no cap on attempts; others limit funded resits to one.
Where a resit is permitted, it usually involves repeating only the failed assessment method. The learner retains the grades from passed components unless the standard specifies otherwise. In some standards, a resit can only achieve pass — the distinction grade is no longer available to a learner who has already failed a component. This is specified in the assessment plan and must be communicated to learners before EPA so they understand the full implications of performance in their first sitting.
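Because these rules differ standard by standard, it is worth capturing them as structured data rather than relying on individual staff memory. A minimal sketch follows, with hypothetical field names and invented example values; the authoritative source for any standard is its published assessment plan.

```python
# Sketch: capturing per-standard resit rules as structured data.
# Field names and example values are hypothetical; the authoritative
# source is always the standard's published assessment plan.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResitRules:
    retake_all_components: bool   # must passed components be retaken too?
    grade_capped_at_pass: bool    # is distinction unavailable on resit?
    max_attempts: int | None      # None = no cap specified in the plan

RULES = {
    "ST0001": ResitRules(retake_all_components=False,
                         grade_capped_at_pass=True, max_attempts=None),
    "ST0002": ResitRules(retake_all_components=True,
                         grade_capped_at_pass=False, max_attempts=2),
}

def best_available_grade(standard: str, is_resit: bool) -> str:
    """Highest grade still open to the learner at this sitting."""
    rules = RULES[standard]
    return "pass" if (is_resit and rules.grade_capped_at_pass) else "distinction"

print(best_available_grade("ST0001", is_resit=True))   # -> pass
```

Holding this in one place makes the pre-EPA conversation with the learner straightforward: the tutor can state exactly what a resit would cost them in grade terms before the first sitting.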
The ESFA funds one resit or retake per learner. If a learner needs to resit beyond that, the cost typically falls to the employer or the learner. Providers should be clear with employers about this from the start of the programme, particularly for learners who are identified as at risk of not meeting the threshold on their first attempt.
A learner who fails EPA and is withdrawn from their apprenticeship without completing — rather than resitting — has a withdrawal recorded on their ILR. This affects provider performance data and may trigger a funding clawback if the circumstances of the withdrawal are deemed to reflect poor provider support. Supporting learners through a resit, where it is appropriate, is both better for the learner and better for the provider's data.
EPA Grading Preparation Checklist
- Assessment plan read in full — including grade descriptors, not just KSB list and gateway criteria
- Grade descriptors summarised and shared with learners at programme start
- ILP targets written to distinction level, not just gateway threshold
- Progress review agenda includes grade trajectory discussion in the final quarter of the programme
- IQA process checks evidence depth against grade criteria, not just gateway sufficiency
- Mock EPA sessions conducted against distinction-level descriptors with probing follow-up questions
- Learners receive grade-referenced feedback from mock sessions — not just pass/fail judgements
- Resit rules communicated to learners and employers before first EPA sitting
- Grade outcomes tracked across cohorts to identify systematic preparation gaps
- Learners identified as capable of distinction have a specific action plan to reach that level before EPA
Sources & further reading
- End-Point Assessment — IfATE: EPAO register, assessment plans, and grade descriptor documentation for all apprenticeship standards
- Apprenticeship Standards — IfATE: published assessment plans specifying grading structures, component weightings, and resit rules by standard
- ESFA Apprenticeship Funding Rules — GOV.UK: funding eligibility for resits and retakes, and provider obligations around EPA support