AI Explained

What information should you never put into an AI tool?

Every time you open a chat window, you are deciding what to share. Here is what should stay out of AI tools, and why the risk is more real than most people realise.

For most questions, the decision about what to share is trivial. But when the question involves real names, real money, or real confidential information, the calculus changes completely, and most people are not thinking about it carefully enough.

AI tools have made it genuinely easy to get useful help with almost anything. That ease is precisely what creates the risk. The lower the friction to sharing something, the less likely you are to pause and ask whether you should.

So here is the honest answer to what should stay out of any AI tool, why the risks are real rather than theoretical, and what you can actually do without giving up the productivity benefits entirely.

The first category is personal data about other people. This one catches a lot of people out because they are focused on their own task rather than on the individuals whose details appear in it. Pasting in a customer’s complaint email, drafting a message that includes a client’s medical situation, asking for help with an HR issue using an employee’s real name and performance details: all of these push someone else’s personal information into a system they never consented to. Under UK GDPR, you have legal obligations around how you process personal data, and those obligations do not disappear because you are using a third-party tool. Consumer-tier AI products are particularly exposed here. Most free or standard subscription tiers use conversation data to improve their models, which means what you type may be reviewed by humans at the company or used in future training runs. The person whose details you pasted in has no idea their information is now potentially part of an AI training dataset.

The second category is anything covered by a non-disclosure agreement or genuine trade secret. Law firms slip up here. Consultants do too. Employees at large companies do it constantly without thinking. The document you are trying to summarise, the contract you want to check for errors, the strategic plan you need help structuring: if any of these is covered by an NDA or marked confidential by your employer, uploading it to a consumer AI tool is almost certainly a breach of that obligation. This is not a technicality. It is the kind of thing that has ended careers and generated lawsuits. Enterprise-grade versions of these tools offer stronger data handling guarantees, but even then you need to verify what those guarantees actually cover before assuming you are protected.

Financial account details and passwords sit in a third category. These should never go into an AI tool under any circumstances, full stop. There is no legitimate use case that requires you to share your banking credentials, account numbers, or passwords with an AI assistant. If you ever feel like you need to, something has gone wrong upstream in how you are approaching the problem, and you should back up and rethink the task rather than proceeding. This also applies to the credentials of others, including your clients or family members.

Medical information deserves special mention because it occupies a grey area in practice even though the principle is straightforward. It is common to ask an AI to help interpret a test result, draft a message to a GP, or understand a diagnosis. That is understandable. But the specific details you share, especially if they are detailed enough to identify you or someone else, carry significant privacy implications. Health data is a special category under UK GDPR, meaning it gets heightened protection and heightened scrutiny. If you want to use AI for health-related questions, keep them general rather than specific, and do not include names, NHS numbers, or details that could be linked back to a real person.

Intellectual property is another area where the risks are underappreciated. Feeding a substantial portion of an original work into an AI tool, whether that is source code, a manuscript, proprietary research, or creative work you plan to commercialise, raises questions about ownership and disclosure that are still being worked out in law. The honest position is that no one can tell you with certainty what happens to that input or how it interacts with the tool’s training processes over time. If the work has commercial value or is covered by copyright you care about, caution is warranted.

The enterprise tier question comes up a lot, and it is worth addressing directly. Paid business tiers from providers like OpenAI, Anthropic and Google typically include opt-outs from training data use, stronger contractual guarantees about data handling, and sometimes zero data retention commitments. That is meaningfully better than a free consumer account. But “better” is not the same as “no risk.” Even enterprise agreements have limits, exclusions and conditions. You still need to read what you are agreeing to, check what your organisation’s legal team says, and apply judgement rather than treating a business subscription as a blanket licence to share anything.

The practical approach most people can adopt is surprisingly simple. Before pasting anything into an AI tool, ask yourself two questions. First, would the person this information is about be comfortable knowing it was shared here? Second, if this conversation were made public tomorrow, would it cause a problem for you, your employer, or anyone else? If the answer to either question is uncertain, anonymise the details before you type them. Replace real names with placeholders. Swap specific financial figures for approximate ranges. Remove identifying information from documents before uploading them. The AI can still do the task you need it to do, and you have not taken on unnecessary risk in the process.
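For readers who work with text programmatically, the anonymisation step above can be sketched in a few lines. This is a minimal, illustrative example only: the names list, the placeholder scheme, and the currency pattern are assumptions for the demonstration, and a real workflow would need to handle far more kinds of identifying detail.

```python
import re

def anonymise(text, names):
    """Swap known real names for placeholders and blur exact money figures.

    `names` is a list of names you already know appear in the text;
    this sketch does not attempt automatic name detection.
    """
    # Replace each known name with a stable placeholder: Person A, Person B, ...
    for i, name in enumerate(names):
        placeholder = f"Person {chr(65 + i)}"
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)

    # Replace exact sterling amounts with an approximate £1,000-wide range
    def blur(match):
        amount = float(match.group(1).replace(",", ""))
        low = int(amount // 1000) * 1000
        return f"roughly £{low:,}-£{low + 1000:,}"

    return re.sub(r"£([\d,]+(?:\.\d+)?)", blur, text)

message = "Sarah Jones owes £12,450 and disputed the invoice."
print(anonymise(message, ["Sarah Jones"]))
# prints: Person A owes roughly £12,000-£13,000 and disputed the invoice.
```

The point is not the code itself but the habit it encodes: strip the identifying specifics before the text leaves your machine, because the AI can do the task just as well with placeholders and ranges.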

The broader point is that AI tools are not vaults. They are products built and operated by companies, subject to the same commercial pressures, data practices and legal obligations as any other software service. Treating them with the same thoughtfulness you would apply to any other third party is not paranoia. It is just sensible practice. The productivity gains are real and available without compromising information that was never yours to share.