Last updated: 17 April 2026
Shadow AI Is Already in Your Organisation
Before you have approved a single AI tool, before you have drafted an AI policy, before you have run a single training session — your employees are already using AI. ChatGPT, Copilot, Gemini, Claude, Perplexity. Consumer-grade generative AI tools that are free, accessible from a personal browser, and invisible to your IT department. This is shadow AI, and surveys consistently find that between 40% and 60% of UK employees are using AI tools their organisation has not officially sanctioned.
Shadow AI is not primarily a technology problem. It is a governance problem. The employees using these tools are not malicious — they are productive, and AI makes them more productive. The problem is that they are making consequential decisions without the context to make them safely: pasting client data into consumer AI systems that train on user inputs, generating content that contains AI hallucinations presented as fact, making procurement recommendations based on AI outputs that no one has verified. The risks are real and they are accumulating right now, in your organisation, without oversight.
The instinctive response — banning AI tools — does not work and is increasingly counterproductive. It drives usage further underground and signals to employees that the organisation is out of touch with how work actually gets done. The effective response is governance: clear policies, informed managers, and the organisational capability to oversee AI use appropriately.
The Difference Between an AI User and an AI Leader
An AI user knows how to interact with AI tools to get useful outputs. They understand prompting, they know which tools are good for which tasks, and they have developed intuitions about when AI outputs are reliable and when they need checking. This is valuable. Most employees should have some level of AI user capability.
An AI leader knows how to govern AI at an organisational level. They can:
- Evaluate AI tools from a data security, privacy, and procurement perspective — not just a productivity perspective
- Design and implement AI usage policies that are enforceable and proportionate
- Identify and manage AI-related risk — data breaches, copyright liability, regulatory non-compliance, reputational harm
- Brief senior stakeholders on AI decisions in business terms, not technical terms
- Lead cultural change: helping teams move from AI anxiety to AI capability
- Assess AI vendor claims critically — distinguishing genuine capability from marketing
The critical insight is that these are two different skill sets. You cannot get from AI user to AI leader by doing more prompting exercises. Closing the gap requires structured learning in AI governance, ethics, risk management, procurement, and organisational change: topics that generic tools training does not address.
Why Most AI Training Misses the Mark
The majority of AI training currently available in the UK market falls into one of two categories: tools training (how to use specific AI products) or technical training (how to build or fine-tune AI models). Both are valuable in the right context. Neither addresses the governance gap.
Tools training is primarily appropriate for individual contributors who need to use AI in their day-to-day work. It is typically short — a few hours to a few days — and highly specific to particular products. It does not address risk, policy, procurement, or organisational strategy. A manager who has completed a ChatGPT course is better at using ChatGPT but no better equipped to govern how their team uses it.
Technical training is appropriate for developers and data scientists who are building AI-powered products. It is typically long, technical, and requires a STEM background. It is not appropriate for HR Directors, L&D leads, or business managers — and it does not address the organisational and governance dimensions of AI leadership.
The gap between these two categories is where most organisations are most exposed. They have employees who can use AI, and they may have engineers who can build AI, but they lack managers who can govern AI. The result is the shadow AI problem at scale.
Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among the staff who operate and use those systems. The UK's developing AI regulatory framework is moving in the same direction. Organisations that invest in AI governance training now are building capability that is likely to be mandated by regulation within the next 2–3 years, at lower cost and with better outcomes than organisations that wait for regulatory deadlines.
What AI Governance Training Looks Like in Practice
Effective AI governance training for managers and leaders is structured around the decisions they actually need to make. The curriculum of a well-designed AI leadership programme typically covers:
AI fundamentals for decision-makers. Not the technical internals of how AI works, but the conceptual models that inform governance decisions: what generative AI can and cannot do reliably, where hallucinations occur and why, how training data shapes outputs, and what "AI bias" means in organisational practice. This is the knowledge required to evaluate AI tool claims critically.
Risk mapping and classification. How to categorise AI use cases by risk level — from low-risk productivity tools to high-risk decision-support systems that affect employment, credit, healthcare, or legal outcomes. The risk classification frameworks that inform proportionate governance responses.
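To make the idea concrete, here is a minimal Python sketch of a risk-classification check, loosely modelled on the EU AI Act's risk tiers. The tier names, the `AIUseCase` fields, and the decision rules are illustrative assumptions, not a legal framework:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"        # e.g. grammar checking, internal drafting
    LIMITED = "limited"        # e.g. customer-facing chatbots
    HIGH = "high"              # e.g. hiring, credit, healthcare support
    PROHIBITED = "prohibited"  # e.g. social scoring


@dataclass
class AIUseCase:
    name: str
    prohibited_practice: bool        # illustrative flag, e.g. social scoring
    affects_protected_outcome: bool  # employment, credit, healthcare, legal
    customer_facing: bool


def classify(use_case: AIUseCase) -> RiskTier:
    # Check the most severe categories first; a real framework would
    # involve far more criteria, plus legal and compliance review.
    if use_case.prohibited_practice:
        return RiskTier.PROHIBITED
    if use_case.affects_protected_outcome:
        return RiskTier.HIGH
    if use_case.customer_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# An AI-assisted CV screening tool affects employment outcomes, so it
# lands in the high-risk tier and warrants the strongest oversight.
screening = AIUseCase("CV screening assistant", False, True, False)
print(classify(screening).value)  # high
```

The value of the exercise is not the code but the discipline: every use case gets an explicit tier, and the tier determines the governance response.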
AI policy design and implementation. What an effective AI acceptable use policy looks like, how to get it adopted (not just published), and how to maintain it as the AI landscape evolves. How to handle the inevitable policy edge cases: an employee whose AI-assisted work is significantly better than their non-AI work, a contractor using AI in ways the policy does not address.
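One way to make a policy enforceable rather than merely published is to express it as a checkable matrix. The sketch below uses hypothetical tool names and data categories; the design choice worth copying is the default-deny rule, which routes anything the policy does not cover to a named owner instead of letting it through silently:

```python
# Hypothetical policy matrix: which data categories may enter which tools.
POLICY = {
    ("approved_enterprise_llm", "public"): "allowed",
    ("approved_enterprise_llm", "internal"): "allowed",
    ("approved_enterprise_llm", "client_confidential"): "needs_review",
    ("consumer_chatbot", "public"): "allowed",
    ("consumer_chatbot", "internal"): "blocked",
    ("consumer_chatbot", "client_confidential"): "blocked",
}


def check(tool: str, data_category: str) -> str:
    # Default-deny: combinations the policy does not address are escalated,
    # which is exactly how the edge cases above should be handled.
    return POLICY.get((tool, data_category), "escalate_to_ai_lead")


print(check("consumer_chatbot", "client_confidential"))  # blocked
print(check("consumer_chatbot", "biometric"))            # escalate_to_ai_lead
```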
Data governance and security. The specific data risks that AI tools introduce: training data leakage, client data in consumer AI systems, AI-generated content that contains proprietary information, and the liability implications of each. How to set data handling rules that are practical rather than theoretical.
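One practical data-handling rule is a pre-flight check that scans prompts for sensitive identifiers before they leave the organisation. The sketch below is illustrative only: the patterns are deliberately simplified, and a real deployment would use a proper DLP service with patterns tuned to the organisation's own data:

```python
import re

# Simplified, illustrative patterns; not production-grade detection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
}


def preflight(text: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]


prompt = "Summarise this complaint from jane.doe@client.example"
findings = preflight(prompt)
if findings:
    print(f"Blocked: prompt contains {findings}")  # Blocked: ... ['email']
```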
AI procurement and vendor assessment. How to evaluate AI vendor claims, conduct proportionate due diligence, negotiate contracts that include appropriate data handling and liability provisions, and identify red flags in AI vendor pitches. This is a directly applicable skill for any manager involved in technology purchasing decisions.
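Proportionate due diligence can start from a simple weighted rubric. The criteria and weights below are hypothetical; the useful part is the shape: pass/fail questions agreed in advance, with any failure treated as a red flag to investigate before contract signature:

```python
# Hypothetical due-diligence rubric; criteria and weights are illustrative.
CRITERIA = {
    "no_training_on_customer_data": 0.30,
    "uk_eu_data_residency": 0.20,
    "security_certification": 0.20,           # e.g. ISO 27001, SOC 2
    "contractual_liability_terms": 0.15,
    "independent_capability_evidence": 0.15,  # beyond the vendor's own demos
}


def score_vendor(answers: dict[str, bool]) -> float:
    """Weighted pass/fail score in [0, 1]."""
    return sum(w for name, w in CRITERIA.items() if answers.get(name, False))


vendor = {
    "no_training_on_customer_data": True,
    "uk_eu_data_residency": True,
    "security_certification": False,  # red flag: ask for evidence
    "contractual_liability_terms": True,
    "independent_capability_evidence": False,
}
print(f"{score_vendor(vendor):.2f}")  # 0.65
```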
Leading cultural change. How to bring a team from AI anxiety or AI resistance to AI capability — including how to handle legitimate employee concerns about AI and job security, and how to frame AI adoption as augmentation rather than replacement.
Who Should Prioritise AI Governance Training
The organisations with the highest immediate need for AI governance training are those where AI tools are already in widespread use — which, in practice, means most organisations with over 50 employees. The specific individuals who most need this training are:
CTOs and CIOs who are accountable for technology governance but may not have formal training in AI-specific risk frameworks. Understanding how to govern AI is different from understanding how to manage traditional software systems — the risk profile, the rate of change, and the governance mechanisms are all different.
HR Directors who are increasingly making or approving decisions that involve AI tools — AI-assisted screening, performance analytics, workforce planning tools — without the governance frameworks to assess these decisions appropriately.
L&D leads who are being asked to build AI literacy programmes for the wider workforce, but who need AI leadership capability themselves before they can build effective governance into the programmes they design.
Senior managers in regulated industries — financial services, legal, healthcare. These are the environments where AI governance failures carry the highest regulatory and reputational risk, and where the gap between AI user capability and AI leader capability is most consequential.
The Funded Route: AI Leadership as an Apprenticeship Unit
For UK employers, AI governance training for managers is now levy-fundable through the Growth and Skills Levy. The AI Leadership unit (AU0002) is a 4–16 week programme specifically designed for the managers, directors, and senior professionals described in this article. It covers the governance, risk, procurement, and change-leadership dimensions set out above, and it can be delivered with minimal disruption to day-to-day work.
Because the unit is levy-fundable, there is no additional cost for levy-paying employers. The programme is financed from the same levy funds that would otherwise expire unspent in the Treasury. This makes the business case unusually straightforward: addressing your organisation's AI governance gap at zero marginal cost, using funds you have already paid.
Sources & further reading
- EU AI Act, Article 4: AI Literacy — eur-lex.europa.eu
- GOV.UK: UK AI Opportunities Action Plan — gov.uk/government/publications/ai-opportunities-action-plan
- DSIT: AI and Data Governance in the Workplace — gov.uk/dsit