Last updated: 31 March 2026

The Current Reality: Policy Is Lagging Practice

CIPD research published in 2025 found that 44% of UK workers use generative AI tools at work on a regular basis — and that the majority of those workers have received no formal employer guidance about how to use them. ChatGPT, Microsoft Copilot, Google Gemini, and a growing catalogue of sector-specific AI tools are embedded in how a significant portion of the UK workforce operates. The policy gap is not hypothetical. It is live, and it is creating legal and operational risk right now.

The ICO has been explicit: organisations are responsible for how their employees process personal data, including processing through third-party AI tools. When employees input customer data, HR records, client information, or commercially sensitive material into consumer AI tools, the resulting UK GDPR risk falls on the employer to manage — it does not disappear because employees are acting individually.

A generative AI workplace policy does not need to be restrictive or technically complex. What it needs to do is give employees clear guidance on what is permitted, what is not, and what their obligations are when they use AI tools in their work. Without that guidance, the organisation cannot manage the risk and employees cannot know what is expected of them.

Why Every UK Employer Needs a Generative AI Policy

Four categories of risk drive the need for a formal policy.

Data protection risk

UK GDPR requires that personal data is processed lawfully, with appropriate security measures, and only transferred to processors under a compliant data processing agreement (DPA). Consumer AI tools — ChatGPT, the free version of Claude, standard Google Gemini — do not offer a DPA. When employees input personal data into these tools, they are transferring that data to a processor without a DPA, which is a likely breach of Article 28.

The risk is not hypothetical. The Italian data protection authority temporarily banned ChatGPT in 2023. The UK ICO has issued guidance making clear that organisations must assess AI tools for GDPR compliance before use. Several large UK employers have faced internal incidents where employee use of AI tools processed client or customer personal data in ways that created notification obligations.

Intellectual property risk

The copyright status of AI-generated work in the UK is genuinely uncertain. The Copyright, Designs and Patents Act 1988 provides limited protection for computer-generated works, but courts have not yet definitively addressed whether generative AI outputs receive copyright protection or who owns them. More immediately, if an employee uses an AI tool to generate a report, proposal, or creative work for a client and that client’s contract contains a warranty that all work is original — a standard clause — the employer may be in breach without knowing it.

Accuracy risk

Generative AI tools hallucinate — they generate confident, well-formatted content that is factually wrong. An employee who submits AI-generated content without verification to a client, regulator, court, or decision-maker is creating professional liability risk. In regulated sectors (legal, financial, medical), unverified AI outputs in professional communications can constitute a regulatory breach.

Confidentiality risk

Most consumer AI tools use inputs to improve their models unless the user opts out — an opt-out that is off by default and typically requires active configuration or an enterprise account. Employees inputting commercially sensitive strategy documents, unpublished financial results, or client negotiating positions into consumer AI tools are potentially disclosing that information to the AI provider’s training pipeline.

UK GDPR and personal data in AI inputs

The ICO’s guidance on generative AI and data protection sets out the obligations clearly. Before deploying any AI tool that processes personal data, organisations must: identify the lawful basis for processing; complete a Data Protection Impact Assessment (DPIA); ensure an appropriate data processing agreement is in place with the AI provider; and confirm that data will not be used for model training without consent.

For consumer AI tools without enterprise agreements, the simplest compliant position is to prohibit personal data input entirely. For enterprise tools with DPAs (Microsoft Copilot with an enterprise Microsoft 365 agreement, for example), a DPIA is still required, but the contractual basis exists.

Copyright and ownership of AI outputs

The Copyright, Designs and Patents Act 1988 Section 9(3) provides that for computer-generated works, copyright belongs to “the person by whom the arrangements necessary for the creation of the work are undertaken.” In practice, this means the employer or employee who operated the AI system — but only where there is no human author. Where an employee provides substantial creative direction, their contribution may generate separate copyright. The practical implication is that AI-generated outputs should be treated as of uncertain IP status, and client contracts should be reviewed for IP warranties that may need updating.

Employment contracts and AI use

Most employment contracts impose obligations of confidentiality, quality, and professional standards that AI use can engage. An employee who submits AI-generated work without adequate review may be in breach of their employment contract if that work falls below the required quality standard. Updating contracts — or at minimum issuing a policy that clarifies AI use obligations — is important before disciplinary issues arise.

Sector-specific regulation adds further obligations

In financial services, FCA-regulated employees using AI for advice or client communications must ensure AI outputs meet the Consumer Duty and relevant conduct standards. In legal services, the SRA has issued guidance on AI and professional obligations. In healthcare, patient data is special category data under UK GDPR — the bar for AI processing is significantly higher. Check your sector regulator’s current position before deployment.

What a Generative AI Workplace Policy Needs to Cover

A generative AI workplace policy does not need to be a lengthy legal document. It needs to address eight practical areas clearly enough that employees understand what is expected.

1. Approved and prohibited tools

The policy should specify which AI tools are approved for use, under what conditions, and which are prohibited. A tiered approach works well: enterprise-licensed tools with DPAs (approved for general use including work documents); consumer tools without DPAs (approved only for non-confidential, non-personal use — brainstorming, general research); prohibited tools for regulated data (any tool without a DPA where the use case involves personal data or confidential client information).
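
For organisations that keep the approved tools list in machine-readable form — to drive an intranet lookup page, for example — the tiers translate naturally into a small data structure. The sketch below is illustrative only: the tool names, tiers, and conditions are assumptions made for the example, not product recommendations.

    # Illustrative sketch of an approved-tools register encoded as data.
    # Tool names, tiers, and conditions are hypothetical examples only;
    # substitute your organisation's own assessments.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ToolRule:
        tier: str        # "enterprise", "consumer", or "prohibited"
        has_dpa: bool    # is a data processing agreement in place?
        conditions: str  # plain-English conditions shown to employees

    APPROVED_TOOLS = {
        "Microsoft Copilot (enterprise)": ToolRule(
            tier="enterprise", has_dpa=True,
            conditions="General work use permitted; DPIA on file."),
        "ChatGPT (consumer)": ToolRule(
            tier="consumer", has_dpa=False,
            conditions="Non-confidential, non-personal use only."),
    }

    def guidance(tool_name: str) -> str:
        """Return the guidance an employee should see for a given tool."""
        rule = APPROVED_TOOLS.get(tool_name)
        if rule is None:
            return "Not assessed - do not use until approved."
        return f"{rule.tier.title()} tier: {rule.conditions}"

    print(guidance("ChatGPT (consumer)"))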

2. Data protection requirements

Explicitly prohibit inputting personal data, confidential client information, commercially sensitive information, or unpublished financial data into any AI tool without an enterprise DPA. Make clear that employees are personally responsible for checking before inputting. Provide specific examples relevant to your organisation’s context — the more concrete the examples, the more the policy changes behaviour.
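
Some organisations reinforce this rule with a lightweight automated screen before text reaches an AI tool. The sketch below is a minimal illustration of that idea, not a recommendation of any specific tooling: the two patterns shown (email addresses and simplified UK National Insurance numbers) catch only the most obvious cases, and pattern matching is a safety net for employee judgment, never a replacement for it.

    # Minimal illustrative pre-check for obvious personal data before
    # text is pasted into an AI tool. Pattern matching catches only the
    # most obvious cases; real deployments use proper DLP tooling.

    import re

    # Simplified illustrative patterns: email addresses and UK National
    # Insurance numbers. Personal data takes many more forms than this.
    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    }

    def flag_personal_data(text: str) -> list[str]:
        """Return the names of any patterns matched in the text."""
        return [name for name, rx in PATTERNS.items() if rx.search(text)]

    hits = flag_personal_data("Contact jane.doe@example.com re QQ123456C")
    if hits:
        print("Check before sending - possible personal data:",
              ", ".join(hits))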

3. Output quality and verification

Require that all AI-generated outputs are reviewed and verified by a qualified employee before use, sharing, or submission. The employee who uses the AI tool is responsible for the accuracy and quality of the output — not the AI tool. This principle needs to be explicit, because the most common misconception among employees is that AI-generated content is accurate by default.

4. Intellectual property

Address the copyright position for AI-generated work: who the employer considers to own AI outputs created in the course of employment, what disclosure is required when AI is used significantly in client deliverables, and how to handle situations where client contracts contain IP warranties that may be engaged by AI use.

5. Disclosure obligations

Specify when employees must disclose AI use to clients, customers, or regulators. In regulated sectors, the disclosure obligation may be mandatory — the FCA’s Consumer Duty requires transparency about how services are delivered, and advice generated by AI without appropriate disclosure may breach this. Even outside regulated sectors, some clients contractually require disclosure of AI use in deliverables.

6. Training requirements

Make clear that employees must complete AI literacy training before using approved AI tools in their work. The training requirement serves two purposes: it ensures employees have the competence to use AI tools safely, and it creates a documented governance trail showing the employer took reasonable steps to prevent misuse.

7. Incident reporting

Provide a clear process for reporting potential AI-related incidents: suspected personal data breaches (which UK GDPR requires be reported to the ICO within 72 hours of the organisation becoming aware, unless the breach is unlikely to result in a risk to individuals), AI output errors that have reached clients or regulators, and suspected misuse by other employees.
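
The 72-hour window runs from the moment the organisation becomes aware of the breach, which makes logging that moment essential. As a minimal sketch of the deadline arithmetic — the function and variable names here are illustrative, not part of any standard tooling:

    # Sketch of the UK GDPR 72-hour breach notification window.
    # The clock runs from when the organisation becomes aware of the
    # breach, so record that timestamp as soon as an incident is logged.

    from datetime import datetime, timedelta, timezone

    NOTIFICATION_WINDOW = timedelta(hours=72)

    def ico_deadline(became_aware: datetime) -> datetime:
        """Latest time to notify the ICO, absent a reasoned delay."""
        return became_aware + NOTIFICATION_WINDOW

    # Example: incident logged at 14:30 UTC on 27 March 2026.
    aware = datetime(2026, 3, 27, 14, 30, tzinfo=timezone.utc)
    print("Notify ICO by:", ico_deadline(aware).isoformat())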

8. Policy review cycle

Generative AI capabilities and the regulatory landscape are changing rapidly. A policy that was accurate in 2024 may be partially obsolete in 2026. Build an annual review cycle into the policy itself, with a named owner responsible for keeping it current.

Training Employees to Use Generative AI Safely

A policy without training is ineffective. Employees need to develop four competencies to use generative AI safely in a workplace context.

Understanding what generative AI is and is not. Employees who understand that AI tools predict plausible text rather than retrieve verified facts are much better positioned to apply appropriate scepticism to AI outputs. Training that explains how large language models work — at a conceptual rather than technical level — changes the relationship employees have with AI outputs.

Data protection in practice. Abstract GDPR training does not change behaviour. What changes behaviour is specific guidance about the tools employees actually use, concrete examples of what constitutes personal data in their context, and clear procedures for the situations they actually encounter. Role-specific scenarios are significantly more effective than generic awareness training.

Output verification skills. Employees need to develop the judgment to know when to verify an AI output, how to verify it efficiently, and how to identify the failure modes specific to the AI tools they use. This requires practice with the actual tools, not just instruction.

Disclosure and IP awareness. Employees in client-facing roles need specific guidance on when to disclose AI use, how to handle IP queries, and how to check whether client contracts affect AI-generated deliverables.

The EU AI Act Article 4 angle

Article 4 of the EU AI Act requires organisations deploying AI systems to ensure their staff have sufficient AI literacy to operate those systems safely. For UK employers with EU-connected operations or EU data subjects, this creates a cross-border AI literacy obligation on top of UK GDPR requirements. Building AI literacy into mandatory training is the most defensible compliance position.

Common Policy Mistakes

Blanket bans. Prohibiting all AI use is counterproductive and unenforceable. Employees will continue using AI tools regardless, but covertly — removing the employer’s ability to manage the risk or build appropriate capability. A policy that channels AI use into approved, safe tools with clear conditions is more effective than one that attempts to prohibit it entirely.

Policy without training. Publishing a policy without accompanying training means employees do not understand what is required of them. The ICO considers training a component of reasonable security measures under UK GDPR — a policy alone does not satisfy the obligation.

No review cycle. AI capabilities are evolving at pace. A policy without a review cycle will quickly become obsolete — approving tools that are no longer safe, prohibiting approaches that are now standard practice, and failing to address new risk categories as they emerge.

Generative AI Policy Implementation Checklist

  • DPIAs completed for all AI tools that process personal data
  • Data processing agreements in place with all enterprise AI tool providers
  • Consumer AI tools assessed: personal data input prohibited or restricted
  • Policy published and communicated to all employees
  • Approved tools list maintained and current
  • Output verification requirement explicitly stated in policy
  • IP ownership position documented
  • Disclosure obligations stated for regulated/client-facing roles
  • AI literacy training completed by all employees using approved tools
  • Incident reporting process documented and tested
  • Client contracts reviewed for IP warranty clauses affected by AI
  • Employment contracts reviewed for clauses engaged by AI use
  • Sector regulator guidance reviewed and incorporated
  • Policy review scheduled annually with named owner
  • Policy acknowledgement recorded for all employees

Train your workforce to use AI safely and productively

TIQPlus supports AI literacy programmes that go beyond awareness — giving employees the practical competence to use AI tools safely, verify outputs, and protect data. See how it works.

Book a demo
