AI Explained

What is shadow AI, and why does it matter at work?

Employees are using AI tools at work without IT approval. Here is what shadow AI is, why it happens, and what the real data risks are under UK GDPR.

Most organisations have an AI policy. Most employees have already ignored it. The gap between the two is where shadow AI lives, and it is growing fast.

Shadow AI refers to the use of artificial intelligence tools at work without the knowledge or approval of IT, security, or management. It happens when an employee opens ChatGPT to rewrite a report, pastes customer data into an AI summariser, or uses a browser plugin powered by a large language model to speed up their inbox. None of it goes through procurement. None of it is logged. And very little of it is understood by the people responsible for keeping company data safe.

The phenomenon is not new. Companies went through the same cycle with personal email, then cloud storage, then WhatsApp. Employees find tools that make their jobs easier and use them, regardless of what the policy handbook says. AI has simply accelerated the pattern because the tools are so capable, so freely available, and so obviously useful that waiting for IT sign-off feels unreasonable. When your AI assistant can condense a two-hour meeting into a five-point summary in thirty seconds, the temptation to just use it is hard to resist.

The scale of the problem is considerable. Research published in 2024 by organisations including the CIPD and several data security consultancies suggested that a significant proportion of UK office workers were using generative AI tools their employers had not approved, and that a meaningful share of those workers had pasted work-related information into them. Those figures have only grown since. The lag between employee behaviour and organisational policy is measured in years, not months.

Understanding why shadow AI happens is important before deciding what to do about it. The primary driver is straightforward: the tools work, and the approved alternatives often do not. Enterprise software is frequently slow, clunky, and locked behind lengthy IT procurement cycles. A commercial AI assistant, by contrast, is available immediately, costs nothing or a few pounds a month on a personal card, and does in seconds what used to take hours. When there is that kind of productivity gap, people fill it themselves.

There is also an awareness problem. Many employees who use AI tools at work genuinely do not realise they are doing something that could be a compliance issue. They think of ChatGPT the way they think of Google: a tool on the internet that they type questions into. The idea that pasting a client contract into a chatbot window might violate data protection law, expose trade secrets, or breach a supplier agreement has not occurred to them, because nobody told them it could.

The data risks are real and worth understanding clearly. When an employee pastes information into a consumer AI tool, that information is transmitted to the servers of a third-party company, processed using that company’s infrastructure, and in some cases used to improve the model further, depending on the tool’s terms of service. For most consumer-facing AI products, the default setting is that inputs can be used for training unless the user actively opts out, which most do not. This means confidential client information, internal strategy documents, HR data, and commercially sensitive communications can end up contributing to a model that is also used by competitors.

From a UK GDPR perspective, this creates a genuine problem. The regulation requires that personal data is processed lawfully, that employees and customers are informed about how their data is used, and that data is not transferred outside the UK without appropriate safeguards in place. Most consumer AI tools are operated by US companies and process data on US infrastructure. Using them to handle personal data about UK clients or employees without a proper legal basis, processor agreement, or data transfer mechanism is likely to breach the UK GDPR, even if the employee had no idea that was what they were doing. The ICO has made clear that ignorance of the rules is not a defence.

Beyond data protection, there are other risks that organisations tend to underestimate. AI tools can and do produce incorrect information, and employees acting on AI output without verification can make consequential errors. If an AI tool is used to draft a contract clause, summarise a legal document, or calculate a financial figure, and the output is wrong, the organisation bears the consequences. Shadow AI removes the possibility of meaningful oversight because the work simply does not appear in any system the organisation controls.

The instinctive response from many IT and legal teams is to ban consumer AI tools outright. Some organisations have done this, blocking access at the network level and issuing stern policies. The evidence suggests this approach does not work particularly well. Employees use mobile data. They use personal devices. They use the tools at home and then bring the outputs into work. Prohibition creates the illusion of compliance without achieving it, and it tends to increase rather than reduce the secrecy around AI use, which makes the risk harder to manage.

A more effective approach is to treat shadow AI as a signal rather than a transgression. If employees are using consumer tools in large numbers, that tells you something important: there is a genuine need that the organisation is not meeting. The answer is to meet it. That means giving people approved tools with proper security configurations, data processing agreements, and opt-out settings for training. Microsoft 365 Copilot, Google Workspace with Gemini, and enterprise tiers of major AI platforms all offer features comparable to consumer tools within an architecture that legal and IT teams can actually govern. They are not free, but they are considerably less expensive than an ICO investigation or a data breach.

Alongside tool provision, the other essential step is education. Most shadow AI use is not malicious. It is well-intentioned and uninformed. Employees need to understand, in plain terms, what the risk actually is when they paste company data into an unmanaged tool: not a vague threat of being in trouble, but a concrete explanation of where the data goes, what could happen to it, and what that means for the client, the company, and potentially for them. Policy documents locked behind the intranet do not achieve this. Conversations, training sessions, and accessible guidance do.

Shadow AI is not going away. The tools are too useful and too accessible for prohibition to hold. The organisations that manage it well will be the ones that get ahead of it: understanding what their employees actually need, providing tools that meet that need safely, and building the kind of informed culture where people know enough to make better decisions. That is harder than writing a policy, but it is the only approach that works.