AI Explained

What is a prompt, and why does wording change the answer?

Why the way you phrase a question to an AI matters far more than most people realise, and how small changes in wording produce completely different answers.

There is something that takes most people a while to grasp about AI: it is not a search engine. Typing a question into ChatGPT, Claude or Gemini is not the same as typing a query into Google. The way you phrase something does not nudge the results slightly. It determines the entire shape of the answer. Two people asking about the same subject can walk away with completely different responses depending on how they framed the question.

This is what a prompt is: whatever you type into the chat window. Your question, your instruction, your request. But the word undersells what it actually is. Think of it less as a question and more as a brief. When you commission a piece of writing, a design, or some advice, the quality of what comes back depends heavily on how well you explained what you needed. AI works the same way.

The model has no memory of you between conversations, no idea what you already know, and no way to guess what you actually want unless you tell it. It takes what you write at face value and responds to that. So if the input is vague, the output tends to be generic. If the input is specific, the output tends to be useful.

To see this in practice, consider something simple: you want help writing an email to a client who has not paid an invoice.

If you type “write me an email about an unpaid invoice,” you will get a polite, bland placeholder. It will mention a payment is overdue, thank the client for their attention, and include a line about getting in touch with any questions. It will be grammatically fine and entirely forgettable.

Now try this instead: “Write a firm but professional email to a client who is four weeks overdue on a £2,400 invoice. We have sent two reminders already. Tone should be direct — I want to make clear this needs resolving this week — but not aggressive. Keep it under 150 words.”

The output is a completely different thing. It is specific to the amount, the delay, and the prior reminders. It has the tone you actually wanted. It is short because you said you wanted it short. The model did not become smarter between the two prompts. You gave it more to work with.

This is the single biggest shift that takes people from frustrated AI users to effective ones. Not learning technical techniques, not memorising special commands. Just giving the model context, purpose, and format.

Context is about telling the AI who you are and what situation you are in. “I run a small accountancy practice and I am writing guidance for clients about Making Tax Digital” produces a far more targeted response than “explain Making Tax Digital.” The model tailors the complexity of the explanation, the assumed knowledge, and the examples it reaches for. Without that context, it defaults to something in the middle that often satisfies nobody.

Purpose is about telling the AI what the output is actually for. There is a difference between asking for a summary you will read yourself and asking for a summary you will share with a board of directors. There is a difference between wanting bullet points to jog your own memory and wanting a paragraph you can paste into a report. The model cannot tell which one you need. You have to say.

Format is the part people most often forget to mention. If you ask for information without specifying how you want it, the model will make a choice for you. It tends to default to lists and headers, which is fine for some purposes and completely wrong for others. If you want flowing prose, say so. If you want a table, ask for a table. If you want three short paragraphs, that is a perfectly reasonable instruction and the model will follow it.

There is also the question of role. Asking an AI model to respond as though it were a particular kind of expert often improves the output substantially. Not because it unlocks hidden knowledge, but because it changes the framing of the response. “Explain what a pension is” produces one kind of answer. “Explain what a pension is, as though you are a financial adviser speaking to someone who has just started their first job” produces something with a different register, different examples, and a more useful sense of what the person in front of you actually needs to hear.

Length is worth mentioning because it is so often ignored. By default, most AI models err on the side of thoroughness. They will write a lot. This is not always what you want. Asking for brevity works. “In three sentences” or “under 100 words” or “give me the key point only” all produce shorter responses. If you want depth, you can ask for that too: “go into detail on the trade-offs” or “I want a thorough breakdown” signals to the model that you have time and appetite for a longer answer.

One of the most useful habits to develop is iteration. A lot of people type a prompt, look at the answer, decide it is not quite right, and give up or start again from scratch. Instead, treat the first response as a draft and carry on in the same conversation. “That is too formal, can you make it sound more conversational?” or “The first paragraph is good but the second half loses the thread” are perfectly valid follow-up prompts. The model can revise and refine. Using it as a conversation rather than a one-shot query makes an enormous practical difference.

None of this requires learning a new vocabulary or studying techniques with names. It is the same thing you would do when asking a knowledgeable colleague for help. You would not walk up to a colleague and say “invoice email.” You would explain the situation. The AI is the same. It responds to the quality of the brief it receives.

Getting better results from AI, in most cases, is not about the AI at all. It is about getting clearer on what you actually want before you type.