Last updated: 25 March 2026

The Counterintuitive Argument

The dominant anxiety about AI in the workplace focuses on the jobs and tasks it will take over. This is a real phenomenon, and it deserves serious engagement. But it produces a deeply misleading conclusion when applied to workforce development strategy: the idea that, as AI handles more cognitive work, investment in human skills becomes less important or less urgent.

The economics run in precisely the opposite direction. Scarcity drives value. When AI can handle the information-retrieval, pattern-matching, and routine analytical tasks that previously occupied significant portions of knowledge workers’ time, the residual human contribution — the work that AI cannot do genuinely well — becomes both more visible and more valuable. The skills that organisations struggle to recruit for are not being automated away; they are being amplified in importance by the automation of everything around them.

This matters for how L&D and HR teams allocate their training investment. Organisations that respond to AI adoption by shifting their training budget exclusively towards technical AI skills are making a mistake not unlike that of an organisation that responded to the spreadsheet by training all its accountants in Excel while halting the development of their financial judgment. The tool skills matter. But they are not the scarce resource that determines competitive performance — and treating them as if they are produces a workforce that is technically capable of using AI tools but increasingly poor at doing the things those tools cannot do.

The WEF’s Future of Jobs research consistently shows that the skills employers most struggle to develop — and most struggle to recruit for — are not technical ones. They are judgment, communication, creative thinking, and the ability to work effectively with incomplete or ambiguous information. CIPD data on hard-to-fill vacancies in the UK points in the same direction: the gaps employers report most acutely are in what are sometimes called “soft skills,” a label that significantly undersells their strategic importance.

The Six Human Skills That Become More Valuable With AI

Not all human skills are equally affected by AI adoption. The following six are the ones that research and practitioner experience most consistently identify as increasing in relative value as AI handles more cognitive work.

1. Critical judgment — evaluating AI outputs

AI systems produce outputs that are fluent, coherent, and confident in tone — regardless of whether those outputs are accurate. The skill of evaluating whether an AI output is reliable, in what ways it might be wrong, and whether it is appropriate to act on it is a genuinely new professional competency that most employees have not been trained in. It requires both domain knowledge — sufficient expertise to recognise when an AI claim is incorrect — and a mental model of how AI systems fail. Employees who have both are significantly more effective at AI-augmented work than employees who have one or neither. This is a trainable skill, but it requires practice with realistic AI outputs in the employee’s specific domain, not abstract discussion of AI limitations.

2. Contextual intelligence — applying patterns to specific situations

AI systems are trained on broad patterns and produce outputs that reflect those patterns. What they cannot do is apply a general pattern to a specific situation with full awareness of the contextual factors that make this situation different from the training distribution. A customer complaint that sits at the edge of standard categories, a team conflict with a specific history and interpersonal dynamic, an operational decision that depends on factors not captured in the data — in each of these cases, the general AI output is a starting point that requires human contextual intelligence to become useful. This skill is the synthesis of domain expertise, situational awareness, and judgment — and it is built through experience and reflection, not information transfer.

3. Empathy and relationship — what AI can simulate but not genuinely provide

AI systems can simulate empathetic communication with considerable sophistication. What they cannot do is genuinely understand the emotional state of another person, or bring to the interaction the meaning that comes from shared human experience. In roles where the quality of the relationship matters — leadership, coaching, client service, healthcare, education — the difference between simulated empathy and genuine human connection is consequential. As AI handles more of the transactional and informational aspects of these roles, the distinctly human relational dimension becomes the primary driver of quality outcomes. Training that builds genuine empathy — the ability to perceive, understand, and respond appropriately to the emotional experience of others — is not an optional soft skill. It is a core professional competency for any role where relationship quality matters.

4. Creative synthesis — combining ideas in non-obvious ways

AI is very good at recombining and extending patterns it has seen in its training data. It is not good at the kind of creative synthesis that draws on lived experience, cross-domain knowledge, and genuine intuition to produce something that is both novel and relevant in a specific context. The human contribution to creative work in an AI-augmented environment is not to compete with AI on its terms — generating large volumes of varied content — but to contribute on distinctly human terms: the ability to identify the insight that matters in a specific situation, to connect ideas from domains that AI does not associate, and to exercise creative judgment about what is genuinely useful rather than merely novel. Building this capacity requires creative practice, exposure to diverse ideas, and reflective feedback — not the information-heavy training formats that dominate L&D budgets.

5. Ethical reasoning — especially for decisions with AI involvement

As AI systems are involved in more consequential decisions — recruitment screening, performance assessment, credit decisions, clinical support — the humans in those workflows bear responsibility for outcomes that AI was instrumental in producing. Ethical reasoning in AI-assisted contexts requires: the ability to identify when an AI-assisted decision raises fairness or rights concerns; an understanding of how AI systems can encode and amplify bias; and the professional confidence to override or escalate an AI recommendation when ethical judgment requires it. This is not a philosophical exercise — it is a practical professional skill that is increasingly required for roles involving consequential decisions. Training programmes that address ethical reasoning only in generic “values” modules are not preparing employees for this context-specific challenge.

6. Communication and persuasion — the human interpretation of AI-generated data

AI can generate analysis, summarise data, and draft reports. What it cannot do is interpret that analysis for a specific audience with a specific decision to make, in a specific organisational context, with all the stakeholder awareness that effective communication requires. The skill of taking AI-generated analysis and translating it into communication that influences the right people in the right way — knowing what to emphasise, what to contextualise, what to challenge, and how to frame the implications for people who did not commission the analysis — is increasingly the primary human contribution to analytical work. Communication training that builds this interpretive and persuasive capability is not a soft skill supplement to technical training. It is the skill that determines whether technical analysis actually influences anything.

Why These Skills Are Hard to Train

Human skills are systematically under-invested in by most L&D programmes not because organisations do not value them, but because they are genuinely harder to design training for, harder to measure, and harder to demonstrate ROI on — particularly in the short term — than technical skills training.

Human skills require practice, feedback, and time. They are not built by watching a presentation about empathy or reading an article about critical thinking. They are built through repeated experience of situations that require these skills, feedback from people who can recognise and articulate what good performance looks like, reflection on that feedback, and application in real work contexts over an extended period. The learning science here is unambiguous: complex skills that require judgment and social intelligence develop through deliberate practice and spaced repetition, not through information transfer. But most L&D budget is still allocated to content-heavy, one-time programmes that are well suited to information transfer and poorly suited to complex skill development.

This is the practical implication of taking human skills seriously as a training priority: it requires different programme designs, different timelines, and different measurement approaches. The investment is real, and the returns are real — but they manifest over months and quarters, not days. L&D teams that cannot make this case to their stakeholders will continue to underinvest in human skills development relative to its strategic importance.

Training Design Principles for Human Skills

Translating the above into practical training design requires departing from the standard L&D programme template. Four principles characterise effective human skills development.

Scenario-based. Every human skills module should be anchored in realistic scenarios drawn from the roles and contexts of the learners. Abstract discussion of critical thinking or empathy does not transfer to workplace behaviour. Scenarios that place learners in specific situations — where they must make a judgment call, respond to an emotional cue, or synthesise conflicting information — produce the practice that builds skill. The scenarios should be difficult enough to require genuine effort, and varied enough to prevent learners from pattern-matching the answer rather than actually developing the underlying skill.

Conversational. Human skills develop in dialogue. Cohort-based learning with structured peer discussion, coaching conversations, and facilitated group reflection produces significantly better outcomes for complex skill development than solo e-learning. This has resource implications — facilitated sessions cost more than self-paced digital content — but the alternative is investing in content that does not change behaviour. Blended designs that combine short digital preparation modules with regular facilitated practice sessions are the most resource-efficient approach.

Reflection-driven. The mechanism by which human skills develop through practice is reflection: the structured processing of what happened, why it happened, and what would be different next time. Building deliberate reflection time into programme design — through guided journals, structured debrief questions, or coaching conversations — is not a “nice to have” for human skills programmes. It is the learning mechanism. Programmes that skip reflection in favour of more content are optimising for the wrong variable.

Spaced over time. A one-day intensive on communication skills produces very little durable behaviour change. A structured programme of 60-minute practice sessions over 12 weeks, with real work application between sessions and reflection built in, produces substantially more. The evidence base for spaced practice over massed practice is robust across all skill types but is particularly important for complex skills that require habit formation. Programme design that spaces practice over time is not a convenience — it is the evidence-based design choice.

What NOT to Invest In

Redirecting training investment towards human skills requires, in most organisations, also making decisions about what to invest less in. Two categories warrant honest scrutiny.

Rote knowledge memorisation for content that AI can retrieve on demand. Large proportions of compliance training, regulatory training, and product knowledge training in most organisations are designed to ensure employees can recall information. In an environment where that information can be retrieved accurately by AI tools in seconds, the training case for memorisation is significantly weaker than it was. This does not mean that understanding is unnecessary — there is a meaningful difference between being able to recall a regulation and understanding what it means and how to apply it. But it does mean that training designs built around memory for factual content — the kind that e-learning quizzes test — are a declining return on investment as AI retrieval becomes ubiquitous.

Compliance-style one-day awareness courses that do not change behaviour. A significant proportion of organisational training spend goes on mandatory awareness programmes — often delivered as a full-day or half-day event covering a range of topics — that produce high completion rates and negligible behaviour change. The evidence that these formats are ineffective at changing complex behaviours is well-established. The case for continuing to fund them — beyond the compliance checkbox they satisfy — weakens further in an environment where that same budget could fund programme designs that actually build human capabilities. L&D teams that make this case honestly to their organisations are doing their stakeholders a service, even if the conversation is uncomfortable.

The Manager’s Specific Skill Requirement

Managers face a specific set of human skill demands in AI-augmented workplaces that are distinct from those of individual contributors and that require targeted training investment.

The first challenge is managing teams where individual members have different levels of AI adoption and different tool access. When some team members are producing AI-augmented outputs and others are not, the manager faces genuinely novel fairness and evaluation challenges: how to assess output quality fairly when the inputs differ, how to support less AI-fluent team members without penalising early adopters, and how to set consistent expectations in an environment where the definition of good performance is in transition.

The second challenge is building psychological safety in AI-uncertain environments. Many employees have genuine anxieties about AI — about job security, about whether AI-augmented work “counts” as their own, about making mistakes while using unfamiliar tools. Managers who have not been trained to recognise and respond to these concerns will either dismiss them (producing disengagement and resistance) or collude with them (producing an AI adoption environment where genuine concerns about governance are conflated with resistance). The skills required are active listening, honest communication about organisational direction, and the ability to separate legitimate concerns from resistance that the organisation cannot accommodate.

The third challenge is evaluation of AI-assisted outputs. Managers whose direct reports are using AI tools to support their work need to be able to evaluate those outputs with appropriate critical judgment — neither over-trusting the AI-assisted work (which removes the human accountability that the organisation depends on) nor systematically discounting it (which removes the productivity benefit of AI adoption). This is a specific critical judgment skill that many managers have not developed, and that standard management training does not address.

“The skills hardest to hire in 2026 are not technical skills. They are judgment, communication, and the ability to operate effectively in ambiguity.”

CIPD data on hard-to-fill vacancies consistently shows that the skills employers report as most difficult to recruit for are not technical — they are the human capabilities that determine whether technical skills produce organisational value. L&D investment strategies that reflect this reality — allocating meaningful budget to human skills development alongside technical AI skills — are better positioned than those that treat human skills as a given and technical skills as the priority.

6 Questions for Your Training Strategy

If you are designing or reviewing your training programme with human skills in mind, these six questions will surface the issues most likely to determine whether your investment produces results:

  • Are your human skills programmes scenario-based? Is learner practice anchored in realistic situations drawn from actual role contexts, or in generic exercises that are unlikely to transfer?
  • Do your programmes include structured reflection? Is there deliberate time built in for learners to process what they are experiencing and draw lessons from it — or is reflection treated as incidental?
  • Are they spaced over time? Are human skills modules delivered as sustained programmes over weeks, or as one-off events — and do you have evidence that your format produces durable behaviour change?
  • Do your managers have a specific AI-era management curriculum? Have you addressed the specific challenges of managing AI-augmented teams — fair evaluation, psychological safety, critical output review — or assumed that general management training covers it?
  • Are you measuring behaviour change, not completion? Do you have a mechanism for measuring whether human skills training is changing the behaviours it is designed to build — or are you relying on completion rates and satisfaction scores?
  • Are you honest about what you are de-prioritising? If human skills development is genuinely a priority, what existing training spend are you reducing to fund it? The organisations that make real progress on human skills development are the ones that make explicit trade-offs — not the ones that add human skills to an already full training calendar.

Training that builds skills, not just completion rates

TIQPlus gives L&D teams the platform to design, deliver, and measure training programmes that track behaviour change — not just activity. See how it works for human skills development programmes.

Book a demo

