Last updated: 26 March 2026
What AI agents actually are — and why chatbot training isn’t enough
The AI tools most employees were trained on in 2024 and 2025 were primarily generative AI assistants: tools that respond to a prompt, produce an output, and stop. The human remains in control of every step. They write the prompt. They review the output. They decide whether to use it.
AI agents are fundamentally different. An agent is a software system that can pursue a goal across multiple steps, using tools, APIs, and decision logic to take actions — not just generate text. You give an agent an objective (“process these expense reports and flag anything over the policy limit”) and it plans and executes the steps itself: reading the files, applying the policy rules, writing to the expense system, sending notifications. The human is not guiding each step. They set the goal and review the outcome.
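The expense-report example above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's implementation: the policy limit, field names, and steps are all invented for the sketch, but it shows the defining feature of an agent — the steps run end to end without a human approving each one.

```python
# Minimal sketch of an agent-style workflow: given a goal ("flag anything
# over the policy limit"), the program executes every step itself rather
# than pausing for a human between steps. All names are hypothetical.

POLICY_LIMIT = 200.00  # hypothetical per-item limit in GBP

def process_expense_reports(reports):
    """Read reports, apply the policy rule, record results, queue notifications."""
    approved, flagged, notifications = [], [], []
    for report in reports:                    # step 1: read the files
        if report["amount"] > POLICY_LIMIT:   # step 2: apply the policy rule
            flagged.append(report)            # step 3: write to the expense system
            notifications.append(             # step 4: send a notification
                f"Flagged {report['id']}: £{report['amount']:.2f} exceeds limit"
            )
        else:
            approved.append(report)
    return {"approved": approved, "flagged": flagged, "notifications": notifications}

reports = [
    {"id": "EXP-001", "amount": 45.00},
    {"id": "EXP-002", "amount": 320.50},
]
result = process_expense_reports(reports)
```

The human's role in this sketch is exactly as the article describes: set the goal (the policy limit and the rule) and review the outcome (`result`), not supervise each loop iteration.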
This distinction matters enormously for training. When a chatbot produces an error, a human catches it before anything happens. When an agent makes an error at step two of a seven-step workflow, that error propagates through steps three, four, five, six, and seven before a human sees the result. The consequences of errors are categorically different — and the skills employees need to catch them are correspondingly different.
How agents are appearing in UK workplaces in 2026
Agentic AI is not a future technology. It is embedded in platforms that many UK employers already pay for and deploy.
Microsoft 365 Copilot now includes agent capabilities that can perform multi-step tasks across Outlook, Teams, SharePoint, and Dynamics. An agent can monitor a shared inbox, categorise enquiries, draft responses, route to the right team, and log the interaction — without a human touching each message.
Salesforce Agentforce deploys agents across CRM workflows: qualifying leads, updating records, scheduling follow-ups, and escalating cases based on automated criteria. For sales and customer service teams, agentic AI is already part of the daily workflow.
ServiceNow and SAP are building agent capabilities into IT service management and ERP processes. Agents handle routine IT tickets, approve low-risk change requests, and process standard HR transactions such as leave requests and onboarding tasks.
The pattern across all of these is consistent: agents are taking over the structured, repeatable tasks that previously required a human to execute each step. This frees human workers for judgment-intensive work — but it also means that humans who previously executed those tasks now need to supervise agents doing them, which is a fundamentally different type of work requiring different skills.
What agentic AI means for job roles
The entry of agents into everyday workplace tools changes what many roles actually require. The change is not primarily about job elimination — it is about task composition. The tasks that are being automated by agents are typically the structured, process-following tasks within a role. What remains are the tasks that require judgment, relationship management, creativity, and oversight. And new tasks are created: specifically, the supervision and governance of the agents doing the automated work.
A finance administrator whose role involved processing invoices now needs to supervise an agent processing invoices — which means understanding what the agent is doing well enough to catch when it is about to make an error, knowing when to override, and understanding how to escalate a case the agent cannot handle. These are not harder tasks than invoice processing, but they are different tasks that require different preparation.
The roles most affected in the near term are those with high proportions of structured, process-following tasks: finance operations, HR administration, customer service, compliance monitoring, and content production. The roles least affected in the near term are those where the core task is judgment, relationship management, or novel problem-solving — though even these roles will increasingly use agents as research assistants and workflow automation tools.
The three new skill types employees need
Building on established AI literacy (understanding what AI is, how it fails, and how to use it appropriately), working effectively with AI agents requires three additional skill types that most current AI training programmes do not address.
Agent supervision
Agent supervision is the ability to monitor what an agent is doing, understand whether it is behaving as expected, and intervene when it is not. This is qualitatively different from reviewing a chatbot output, because it requires understanding a process at the level needed to evaluate whether each step of that process is being executed correctly — not just whether a final output looks plausible.
Employees who previously executed a process manually typically have this process knowledge and can apply it to supervision. The training need is helping them translate that knowledge into a supervision mindset: where are the decision points in this workflow? What are the error conditions? What would a wrong-but-plausible output look like at each step? How do I know when to let the agent continue versus when to pause and review?
Workflow design
Effective use of agents requires the ability to specify a task clearly enough that an agent can execute it reliably. This goes beyond writing prompts — it involves thinking through a workflow structurally: what are the inputs? What are the decision rules? What are the exception conditions? What output format is required, and what happens if the agent cannot produce it?
Most employees have never been asked to specify their own workflows at this level of precision, because the implicit knowledge in their heads has always been sufficient. Making that implicit knowledge explicit, in a form that an agent can act on, is a learnable skill — but it requires deliberate practice with feedback, not just awareness that agents need clear instructions.
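One hypothetical way to picture "making implicit knowledge explicit" is to write the four questions above — inputs, decision rules, exception conditions, output format — down as a structured specification. Every field name and rule in this sketch is illustrative, not drawn from any real system:

```python
# An implicit invoice-processing workflow made explicit as a specification
# an agent could act on. All fields and rules are hypothetical examples.

invoice_workflow = {
    "inputs": ["supplier_invoice_pdf", "purchase_order_number"],
    "decision_rules": [
        {"if": "amount matches purchase order", "then": "approve for payment"},
        {"if": "amount differs by 5% or less", "then": "approve and log variance"},
    ],
    "exception_conditions": [
        {"if": "no matching purchase order", "then": "escalate to finance team"},
        {"if": "duplicate invoice number", "then": "reject and notify supplier"},
    ],
    "output_format": {"status": "approved | escalated | rejected",
                      "log_entry": "required"},
}

def unspecified_cases(spec):
    """Check the spec for rules that name a condition but no next action —
    the gaps where an agent would behave unpredictably."""
    rules = spec["decision_rules"] + spec["exception_conditions"]
    return [rule for rule in rules if not rule.get("then")]

gaps = unspecified_cases(invoice_workflow)
```

The point of the check at the end is the skill the article describes: an employee who can spot that a condition has no specified action has found exactly the gap where an agent will do something unintended.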
Error detection and escalation
Agents fail in ways that differ from the ways humans fail. They are consistent — if they make a given type of error once, they will make it every time the same condition arises. They are plausible — their outputs typically look correct even when they are not. And they are silent — they do not express uncertainty or flag that they are unsure. Training employees to detect agent errors therefore requires specific practice with realistic examples where agents appear to be working correctly but are making subtle mistakes — the wrong date, the slightly incorrect calculation, the edge case the agent was not configured to handle.
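Because agent errors are plausible and silent, supervision in practice often means running independent checks on an output rather than eyeballing it. The sketch below is a hypothetical example of such a check, using an invented leave-request record: it recomputes the agent's arithmetic instead of trusting it, and catches a subtle off-by-one error in a record that otherwise looks correct.

```python
# Supervision by independent check: recompute what the agent computed
# rather than judging whether its output "looks right". The record
# structure and rules are hypothetical.

from datetime import date

def check_leave_record(record):
    """Return a list of problems found; an empty list means the checks passed."""
    problems = []
    # Recompute the day count (inclusive of both end dates).
    expected_days = (record["end"] - record["start"]).days + 1
    if record["days_requested"] != expected_days:
        problems.append(
            f"day count is {record['days_requested']}, expected {expected_days}"
        )
    # An edge case an agent may not be configured to handle.
    if record["end"] < record["start"]:
        problems.append("end date precedes start date")
    return problems

# A plausible-looking output with a subtle off-by-one error:
# 6–10 April inclusive is five days, not four.
record = {"start": date(2026, 4, 6), "end": date(2026, 4, 10), "days_requested": 4}
issues = check_leave_record(record)
```

A human reviewer scanning this record would likely wave it through; the recomputation catches it every time — which is also why, once written, such checks scale across the consistent errors agents make.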
How to build agent-readiness training
Agent-readiness training should not replace the AI literacy foundation — employees still need to understand what AI is, how it produces outputs, and what the data governance implications are. But it should extend that foundation with practical agent-specific content.
The most effective format is task-based: give learners a realistic agent workflow in a sandbox environment and ask them to supervise it, including scenarios where the agent makes an error that they need to catch. Abstract content about agents is far less effective than practice with agents in a realistic context. The goal is to build the specific monitoring habits and error-detection instincts that transfer to the live environment.
Duration is typically 4–6 hours of structured learning for employees whose roles will involve agent supervision, with shorter refresher content (30–60 minutes) as agents are deployed in new areas of the organisation. Manager-level training should include an additional component on how to assess whether agents are performing within acceptable parameters — the governance layer that ensures human oversight is genuinely functioning rather than just formally present.
If your AI literacy programme was built in 2024 or early 2025, it almost certainly focuses on prompt writing and output review — the skills for chatbot-style AI. These are necessary but not sufficient for the agentic AI your workforce is now encountering in Microsoft 365, Salesforce, ServiceNow, and other enterprise platforms. Review your programme against the agent-supervision, workflow-design, and error-detection skills before assuming your workforce is ready.
UK policy context: the AI Opportunities Action Plan
The UK Government’s AI Opportunities Action Plan, published in January 2025, explicitly identifies agentic AI as a priority area for UK economic opportunity. The plan’s focus on making the UK a “global AI hub” is partly predicated on UK organisations deploying AI more effectively than their international competitors — and agent deployment at scale is one of the clearest routes to the productivity gains the plan identifies.
For employers and training providers, this policy context matters for funding eligibility. Skills England’s emerging AI skills framework is expected to include agent-related competencies as eligible skills for levy and Skills Bootcamp funding. Organisations designing AI readiness programmes in 2026 should map their agent-supervision training to these emerging frameworks to ensure eligibility as the policy landscape develops.
Sources & further reading
- GOV.UK AI Opportunities Action Plan — gov.uk/government/publications/ai-opportunities-action-plan
- CIPD: AI and the world of work — cipd.org/en/knowledge/guides/ai-world-of-work
- Skills England report: Driving growth and widening opportunities — gov.uk/government/publications/skills-england-report