AI Explained

When not to use AI

AI is genuinely useful for many tasks. But knowing when not to reach for it matters just as much as knowing how to use it well.

The hype around artificial intelligence is relentless. It can write, summarise, translate, code, design and analyse. For a growing number of tasks, it does these things well enough to be genuinely useful. But for every category where AI adds real value, there is another where using it makes things worse, not better. The honest question is not just what AI can do, but when you should not reach for it.

Getting this wrong has real costs. Not catastrophic ones in most cases, but the kind that quietly accumulate: a client relationship slightly damaged by an email that sounded too polished and not quite right, a decision made on the basis of a confident-sounding AI answer that turned out to be wrong, a piece of work that looks like everybody else’s because the same tool produced all of it.

The most obvious category where AI falls short is anything where accuracy matters more than speed. AI models produce plausible-sounding text. They are very good at it. But plausible is not the same as accurate, and the model has no way to tell you when it is guessing. Legal documents, medical decisions, financial calculations and technical specifications are all areas where a confident wrong answer is worse than no answer at all. If you are drafting a contract, checking drug interactions, calculating structural loads or preparing accounts, AI can help you research and draft, but it cannot be the final authority. The professional at the end of the process is still essential, and cutting them out because the AI sounded certain is where things go wrong.

High-stakes personal decisions belong in this category too. AI is a poor substitute for a doctor, a solicitor or a financial adviser, not because those professionals are infallible, but because they carry accountability that the model does not. The model cannot be held responsible for what it tells you. It will not lose its licence, face a complaint or pay damages. You are on your own if you act on bad advice, and bad advice from AI tends to arrive with complete confidence and zero caveats.

The second major category is anything requiring original judgement in a genuinely contested situation. AI is good at synthesising existing ideas and presenting them coherently. It is not good at taking a position that nobody has taken before, or at exercising genuine moral judgement in a messy real-world situation. Ask it to help you think through a difficult problem and it will give you a useful structured response. Ask it to tell you what you should actually do when the stakes are personal and the right answer is unclear, and you will get something that sounds helpful but lacks the thing that matters most: accountability to you and your specific situation.

This applies to creative work in a particular way. AI can produce writing, images and music that are technically competent and, in many cases, superficially impressive. What it cannot do is make the kind of decision that gives creative work its character. The choices that define a good piece of writing, a distinctive visual identity or a piece of music that does not sound like everything else are rooted in individual perspective and experience. AI has neither. It has patterns drawn from what already exists. Using it for a first draft is one thing. Using it to make the creative decisions is something quite different, and the result tends to look and read exactly like what it is: content that came from the average of everything published before.

The third category covers tasks where physical presence, context and relationship genuinely matter. A doctor needs to examine a patient. A manager dealing with a serious personnel issue needs to be in the room. A negotiation that depends on reading the other person requires a human being. AI cannot replace any of these, not because the technology is insufficient, but because the value being delivered is not informational. It is relational. It is about trust, presence and the kind of communication that happens in person and cannot be replicated through text generation.

Customer-facing situations sit in an uncomfortable middle ground here. AI tools for customer service have improved considerably, and for straightforward queries they can work well. But when something has gone wrong and the customer is frustrated, the value of talking to an actual person is significant. An AI-generated response to a serious complaint can feel dismissive even when the words are perfectly measured, because what the customer wants is to feel heard by a human being, and that is not something a language model can genuinely provide.

There is also a case for stepping back from AI when the act of doing something yourself is most of the point: learning a language, writing a letter to someone you care about, working through a difficult problem to build your own understanding of it. AI can do all of these things for you. But if you hand them over, you do not get the outcome that actually mattered, which was not the finished product but the process of producing it. Some things are worth the effort precisely because they are effortful.

None of this is an argument against using AI. The tools are genuinely useful, and the people who learn to use them well will have a real advantage over those who do not. But that advantage comes from knowing what the tool is good for, not from reaching for it by default. The people who get the most from AI are the ones who have a clear sense of where it helps and where it gets in the way. The hype makes that harder to develop, because it implies the answer is always more AI. Sometimes it is not.