Last updated: 29 March 2026
How to conduct a skills gap analysis: the step-by-step guide for US L&D teams
A skills gap analysis is the process of comparing the skills your organization currently has with the skills it needs to achieve its business objectives — and identifying where the difference is large enough to act on. This guide covers the full process: defining required capabilities, assessing current state, prioritizing gaps by business impact, and translating analysis into a training plan your CHRO and CFO will both support.
What a skills gap analysis is (and is not)
A skills gap analysis is not a training needs assessment. A training needs assessment starts with training and asks “what should we train on?” A skills gap analysis starts with business strategy and asks “what capabilities do we need, and where are we short?” Training is one response to gaps — alongside hiring, restructuring, or accepting the gap if the business cost of closing it exceeds the cost of living with it.
A skills gap analysis is also not a skills inventory. An inventory catalogs what skills exist in the organization. A gap analysis adds the future-state requirement and the prioritization step — without which an inventory is just data with no action attached.
Done well, a skills gap analysis produces three outputs: (1) a prioritized list of capability gaps with business impact quantified, (2) a clear build/buy/partner decision for each gap, and (3) a training investment plan aligned to strategy rather than to “what we’ve always done.”
Step 1: Define the future-state skill requirements
Start with your organization’s strategic plan, not your current job descriptions. Job descriptions describe what people do today. Strategy describes what the organization needs to be capable of 12-36 months from now. The skill requirements follow from the strategy.
Sources for future-state skills
- Business strategy and OKRs: If the strategy requires entering a new market, what capabilities does success in that market require?
- Technology roadmap: New systems and tools create skill requirements. An AI deployment in operations creates a need for AI-workflow skills before the deployment goes live, not after.
- Competitive benchmarking: What capabilities do competitors have that you don’t? What capabilities are required to compete at the level you aspire to?
- Manager input on performance blockers: What are the specific skill gaps currently limiting team performance? This is a leading indicator for near-term gaps.
- Exit interview data: Patterns in exit interviews often reveal capability gaps in management or organizational systems that surveys don’t surface.
Defining skills at the right granularity
The level of skill definition matters. “Data literacy” is too broad to drive training decisions. “Ability to interpret and act on a Tableau dashboard” is specific enough to inform a training design. “AI fluency” is too broad. “Ability to write effective AI prompts for workflow automation tasks” is trainable.
Work with business unit leaders to define skills at the behavioral level: what would an employee with this skill do differently than an employee without it? That behavioral definition is your assessment target and your training outcome.
Step 2: Assess current capability
Four methods for assessing current skill levels, in decreasing order of reliability:
1. Validated skill assessments
Job-specific tests or simulations that produce an objective capability score. Most reliable but hardest to build. Appropriate for critical technical skills where the cost of a capability gap is high.
2. Manager assessment
Structured manager ratings against a defined skill rubric (typically a 4-point scale: developing, performing, proficient, expert). More scalable than assessments but subject to manager bias — particularly leniency bias where managers rate their teams higher than accuracy would warrant.
Mitigate bias with calibration sessions where managers discuss ratings in groups before finalizing them. Calibration tends to produce more accurate data than individual ratings.
3. Self-assessment
Employees rate themselves against defined skill descriptions. Fastest and cheapest to administer. Most subject to bias — typically in both directions: high performers often underestimate themselves; low performers often overestimate. Use self-assessment as a supplement to manager assessment, not as a primary data source.
4. Performance data as a capability proxy
Where performance metrics can be clearly linked to specific skill requirements, use performance data to infer capability. A sales team with a 40% win rate gap versus top performers can be assumed to have capability gaps in the skills that drive win rate differences — a hypothesis you then confirm through performance analysis.
Coverage decisions
You do not need to assess every skill for every employee. Focus assessment resources on the skills with the highest strategic importance and where the gap is likely to be largest. A 2×2 matrix of importance × expected gap size guides coverage priorities.
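The coverage decision above can be sketched in a few lines. This is an illustrative model only — the skill names, the 1-5 scoring, and the threshold are assumptions, not part of the guide:

```python
# A minimal sketch of the importance × expected-gap coverage matrix.
# Skills, scores (1-5), and the threshold are illustrative assumptions.

skills = {
    # skill: (strategic_importance, expected_gap)
    "AI prompt writing for workflow automation": (5, 4),
    "Tableau dashboard interpretation": (4, 2),
    "Vendor contract negotiation": (2, 4),
    "Internal wiki authoring": (1, 1),
}

def coverage_priority(importance, expected_gap, threshold=3):
    """Place a skill in one of four coverage quadrants."""
    high_imp = importance >= threshold
    high_gap = expected_gap >= threshold
    if high_imp and high_gap:
        return "assess first"   # deepest method you can afford (validated tests)
    if high_imp:
        return "spot-check"     # lighter-weight manager assessment
    if high_gap:
        return "monitor"        # reassess if importance rises
    return "skip for now"       # spend no assessment effort here

for skill, (imp, gap) in skills.items():
    print(f"{skill}: {coverage_priority(imp, gap)}")
```

The point of the sketch is the asymmetry: only the high-importance, high-expected-gap quadrant earns the most expensive assessment method.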
Step 3: Identify and size the gaps
A gap exists where the current assessed capability falls below the required capability level. The size of the gap is the distance between them on your capability scale.
For a four-level scale (1=developing, 2=performing, 3=proficient, 4=expert), a team whose average score is 1.8 against a required level of 3.0 has a gap of 1.2 scale points. A team averaging 2.6 against the same requirement has a gap of 0.4 points.
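The arithmetic above, written out as a tiny function on the guide's 1-4 scale:

```python
# Gap size on the four-level capability scale described in the text.

def skill_gap(current_avg, required_level):
    """Gap in scale points; zero or negative means the requirement is met."""
    return round(required_level - current_avg, 2)

print(skill_gap(1.8, 3.0))  # 1.2
print(skill_gap(2.6, 3.0))  # 0.4
```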
Aggregating to a team or function level is useful for training planning. But don’t lose the individual-level data — a team average of 2.0 can conceal five employees at 3.0 and five employees at 1.0 who need very different interventions.
The output of this step is a gap map: each skill × each population group × current level × required level × gap magnitude. This is your prioritization input.
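One way to represent that gap map is a list of records, one per skill × group. The rows below are illustrative, but they also show why the individual-level data matters: two groups with identical averages (and so identical gaps) can have very different spreads:

```python
# A minimal gap-map structure: skill × group × current × required × gap.
# The ratings and skill names are illustrative assumptions.
from statistics import mean

assessments = [
    # (skill, group, individual ratings on the 1-4 scale)
    ("Tableau dashboard interpretation", "Sales ops", [3.0, 3.0, 1.0, 1.0]),
    ("AI prompt writing", "Sales ops", [2.0, 2.0, 2.0, 2.0]),
]
required = {"Tableau dashboard interpretation": 3.0, "AI prompt writing": 3.0}

gap_map = []
for skill, group, ratings in assessments:
    avg = mean(ratings)
    gap_map.append({
        "skill": skill,
        "group": group,
        "current": avg,
        "required": required[skill],
        "gap": round(required[skill] - avg, 2),
        # keep the spread: an average of 2.0 can hide who actually needs help
        "min": min(ratings),
        "max": max(ratings),
    })

for row in gap_map:
    print(row)
```

Both rows show a gap of 1.0, but the first group splits into proficient and developing halves that need different interventions, while the second is uniformly at "performing."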
Step 4: Prioritize by business impact
Not all gaps are equally important to close. Prioritize using two factors: the strategic importance of the skill and the cost of the gap in current performance terms.
Strategic importance
High importance: the skill is directly linked to achieving a stated strategic priority or OKR. Medium importance: the skill supports enabling functions that are needed but not differentiating. Low importance: the skill is useful but not linked to current strategy.
Cost of the gap
What is the gap costing the organization today in performance terms? High cost gaps are where you can point to lost revenue, productivity drag, compliance risk, or quality problems. Low cost gaps are where the skill is needed but the absence isn’t yet visibly hurting performance.
Prioritize interventions for high-importance, high-cost-of-gap combinations. Defer or monitor low-importance, low-cost combinations. The middle cases require judgment about your organizational context.
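The two-factor rule reduces to a small decision function. The importance and cost labels come from the text above; the tier names for the middle cases are illustrative:

```python
# Sketch of the importance × cost-of-gap prioritization rule.

def intervention_priority(importance, cost_of_gap):
    """importance: 'high' | 'medium' | 'low'; cost_of_gap: 'high' | 'low'."""
    if importance == "high" and cost_of_gap == "high":
        return "prioritize now"
    if importance == "low" and cost_of_gap == "low":
        return "defer or monitor"
    return "judgment call"  # middle cases depend on organizational context

print(intervention_priority("high", "high"))  # prioritize now
print(intervention_priority("low", "low"))    # defer or monitor
print(intervention_priority("high", "low"))   # judgment call
```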
Step 5: Make the close/buy/borrow/eliminate decision
For each priority gap, the options are: train existing employees (close), hire for the skill (buy), develop a partnership or contractor relationship (borrow), or redesign the work so the skill is not required (automate or eliminate).
Training is not always the answer. For skills with a very long development timeline relative to the business need, hiring is often faster. For skills needed intermittently, a contractor relationship is more cost-effective than building organizational capability. For skills that AI tools can now provide, redesigning the workflow may eliminate the gap entirely.
A skills gap analysis that doesn’t include the close/buy/borrow/eliminate decision for each gap is incomplete. L&D teams that present only a training plan from a gap analysis are leaving half the value on the table.
Step 6: Build the training plan
For the gaps you decide to close through training, the plan should specify: the target population, the required skill level, the current average level, the training modality, the timeline to proficiency, and the measurement approach.
Matching modality to gap type
- Knowledge gaps (employee doesn’t know): e-learning, documentation, microlearning. Fast to deploy, low cost, measurable via knowledge assessment.
- Skill gaps (employee knows but can’t do): practice with feedback, coaching, simulation, on-the-job application tasks. Requires more time and manager involvement.
- Behavior gaps (employee can do but doesn’t): habit formation, accountability structures, manager reinforcement. Training alone rarely closes behavior gaps — the environment must change alongside the training.
Sequencing for strategic impact
Sequence training in the order that maximizes business impact, not in the order that is easiest to deploy. The highest-priority gap should receive the first training investment, even if it’s more complex to develop.
Set measurement checkpoints: at 30 days, re-assess a sample of the target population against the skill. At 90 days, collect Kirkpatrick Level 3 (application) data. At 6 months, measure Level 4 (business impact). Close the loop by reporting these results back to the business leaders who defined the original skill requirements.
Tools and templates
A skills gap analysis can be conducted with nothing more than a spreadsheet. The core tool is a matrix with skills as rows, employees or employee groups as columns, and current capability rating in each cell — alongside a required-level column and a gap column calculated as the difference.
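The spreadsheet version can be mocked up with nothing but the standard library. The column names, skills, and ratings below are illustrative — the shape (ratings per group, a required column, a computed gap column) is the point:

```python
# The spreadsheet as described: skills as rows, a current-rating column per
# group, a required-level column, and gap computed as the difference.
import csv
import io

sheet = io.StringIO("""skill,sales_avg,support_avg,required
Tableau dashboard interpretation,2.1,1.6,3.0
AI prompt writing,1.8,2.4,3.0
""")

gaps = {}
for row in csv.DictReader(sheet):
    required = float(row["required"])
    for col in ("sales_avg", "support_avg"):
        # gap column = required level minus current rating
        gaps[(row["skill"], col)] = round(required - float(row[col]), 2)

for key, gap in gaps.items():
    print(key, gap)
```

Reading from an exported CSV instead of the inline string is a one-line change, which is exactly why a spreadsheet is a reasonable starting point before any platform investment.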
For larger organizations, dedicated skills assessment platforms (Degreed, 360Learning’s skills module, SAP SuccessFactors Skills) provide more structured collection and reporting. The risk with platform-based approaches is over-engineering: a platform that requires 6 months to configure and deploy often delays the analysis past the point where the strategic context that prompted it has changed.
Start with a spreadsheet. Add platform infrastructure once the process is validated and you understand what data you actually need. Platforms are good at scale; they are not better than spreadsheets at analytical design.
The four mistakes that waste the analysis
1. Starting with skills, not strategy
Building a skills taxonomy first and then trying to connect it to strategy produces a comprehensive catalog that no one uses. Start from business outcomes — what results do we need that we’re not getting? — and work backward to the skills that drive them.
2. Assessing everything equally
Organizations that try to assess every skill for every employee produce datasets too large to act on and exhaust manager goodwill in the process. A focused assessment of 8-12 priority skills for each major role group is more actionable than a comprehensive survey of 50 skills with low-quality data.
3. Treating the gap map as the output
A gap map is an input. The output is a prioritized action plan with ownership, timeline, and measurement plan. Gap analyses that end with a presentation of where the gaps are — without specifying what will be done about them — are academic exercises, not business tools.
4. No follow-through measurement
The value of a skills gap analysis compounds over time if you close the loop: train, re-assess, report gap closure, update the business on capability progress. Organizations that run gap analyses without re-assessment 6-12 months later cannot show whether training closed the gap — and the analysis becomes a one-time cost with no demonstrated return.
Sources and further reading
- SHRM, Skills-Based Talent Practices: A Guide for HR Professionals — practical framework for skills identification and workforce planning
- Deloitte, The Skills-Based Organization (2022) — research on skill architecture and gap prioritization in mid-large organizations
- ATD, Needs Assessment for Learning and Performance — foundational guide to skills needs analysis methodology