How to use AI as a thinking partner, not just a writing tool
Most people use AI to write things. The more powerful application is using it to sharpen your thinking, stress-test your ideas, and improve your decisions.
Most people who use AI regularly have settled into a comfortable but limiting pattern. They open the chat window when they need something written: a draft email, a summary, a social post. The AI produces text, they edit it, and they move on. That is a legitimate use. But it is also roughly like buying a high-performance car and only driving it on short trips around the car park.
The more interesting application of AI is not as a writing tool at all. It is as a thinking partner: something you use to stress-test your own ideas before you commit to them, to surface counterarguments before someone else does, or to work through a decision when you would benefit from a second view but there is nobody available to give you one.
The distinction matters because the output of thinking is different from the output of writing. When you ask AI to write something, you are asking it to produce a finished artefact. When you ask it to think with you, the goal is clarity in your own head. The AI does not need to produce the answer. It just needs to help you get there.
The simplest version of this is stress-testing an idea. Say you have a proposal you are about to present to a client, or a decision you are about to make about your business. Rather than asking AI to help you write the pitch or the email, ask it to argue against you. Tell it the idea and ask: what are the strongest objections someone might raise to this? What am I missing? Where is this weakest? The quality of the pushback will vary, but the process of reading the objections, even the obvious ones, tends to sharpen your thinking in ways that reading your own draft again does not.
This works because AI is genuinely good at generating perspectives. It has been trained on an enormous range of human thought and argument, which means it can construct a reasonably thorough case against almost any position faster than you can. It is not always right, and it will sometimes produce weak objections dressed up as strong ones. But even a mediocre counterargument forces you to either find an answer or acknowledge a gap. Both outcomes are useful.
A more structured version of the same thing is devil’s advocate mode. You give AI a decision you are weighing, explain which direction you are leaning, and ask it to make the strongest possible case for the opposite choice. This is different from asking it for a balanced analysis, which tends to produce anodyne summaries of both sides. You want it to be adversarial. “Make me doubt myself” is a reasonable instruction. So is “assume I have got this completely wrong and explain why.”
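If you use this mode often, the instruction is worth capturing as a reusable template rather than retyping it each time. The sketch below is purely illustrative: the function name and the exact wording are choices, not any tool's API, and the prompt text simply packages the instructions described above.

```python
def devils_advocate_prompt(decision: str, leaning: str) -> str:
    """Build an adversarial prompt that asks the model to argue
    against the direction the user is already leaning.

    `decision` and `leaning` are plain-English descriptions
    supplied by the user.
    """
    return (
        f"I am weighing this decision: {decision}\n"
        f"I am currently leaning towards: {leaning}\n"
        "Do not give me a balanced analysis. Assume I have got this "
        "completely wrong and make the strongest possible case for "
        "the opposite choice. Make me doubt myself."
    )

# Example: a hypothetical pricing decision.
prompt = devils_advocate_prompt(
    decision="whether to raise our subscription price by 20%",
    leaning="raising the price",
)
print(prompt)
```

The output is just text to paste into a chat window or pass to whatever model you use; the value is that the adversarial framing is baked in, so you cannot quietly soften it when you are attached to your answer.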
The technique works especially well for decisions where you suspect you might be rationalising rather than reasoning. If you have already made up your mind and are looking for justification rather than analysis, AI will not stop you from doing that. But it can at least surface the best arguments on the other side, and sometimes that is enough to introduce the doubt that changes your mind.
AI is also useful as a sounding board for thinking that is not yet formed. Many people have had the experience of explaining a problem to someone and finding, halfway through the explanation, that they have worked out the answer themselves. The act of articulating the problem forces clarity. You can get a version of that with AI. Describe the situation, the constraints, what you are trying to figure out, and ask it to reflect back what it hears as the core tension. Often the summary will be imperfect or miss the point, but engaging with why it is wrong is itself clarifying.
This is more useful than it might sound in situations where you are under time pressure and cannot access the people you would normally talk to. A founder working through a pricing decision late at night has limited options. A manager trying to think through how to handle a difficult conversation before a morning meeting is in a similar position. AI does not replace a trusted colleague with relevant experience, but it is considerably better than thinking in circles on your own.
One practical approach worth developing is the habit of giving AI full context before asking it anything. The more it understands about the situation, the more useful the thinking support becomes. That means explaining not just what you are trying to decide but why it matters, what constraints you are working within, what you already know, and what you are uncertain about. The extra time it takes to write a thorough brief is usually returned several times over in the quality of what comes back.
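The briefing habit can also be made mechanical. One way, sketched below with illustrative names and headings of my own choosing, is a small helper that forces you to fill in every part of the checklist above before you ask anything.

```python
def thinking_brief(decision: str, why_it_matters: str, constraints: str,
                   known: str, uncertain: str) -> str:
    """Assemble a full-context brief to give the model before asking
    for thinking support. The sections mirror the checklist above:
    the decision, why it matters, constraints, what you already know,
    and what you are uncertain about.
    """
    sections = [
        ("What I am trying to decide", decision),
        ("Why it matters", why_it_matters),
        ("Constraints I am working within", constraints),
        ("What I already know", known),
        ("What I am uncertain about", uncertain),
    ]
    return "\n\n".join(f"{heading}:\n{body}" for heading, body in sections)

# Example with made-up details for a pricing question.
brief = thinking_brief(
    decision="how to price the new plan",
    why_it_matters="pricing will anchor how the product is perceived",
    constraints="must decide before Friday; no budget for user research",
    known="two competitors charge roughly double what we do",
    uncertain="whether existing customers will tolerate an increase",
)
print(brief)
```

Because every parameter is required, a half-written brief fails loudly instead of silently producing a vague question, which is the point: the discipline is in the writing, not the code.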
There are limits to this. AI does not have good intuitions about specific people or organisations because it has no access to the particulars of your situation beyond what you tell it. It cannot tell you how your board will respond to a proposal, or whether a specific customer is likely to walk. It also has no stake in the outcome, so its analysis can be tidy in a way your actual situation is not, missing the messy human factors you have not told it about. You still need to exercise your own judgement. The tool is for sharpening that judgement, not replacing it.
The other caveat is that AI is better at widening the space of considerations than at telling you which one matters most. That prioritisation step, the moment where you look at everything you have learned and decide what to do, remains yours. What AI can do is make sure you arrive at that moment having thought harder than you would have done alone.
That is what makes it more interesting as a thinking tool than as a writing tool. Writing is about producing an output. Thinking is about improving the quality of your decisions. The second of those is considerably more valuable, even if it is less visible.