Using AI to write and edit: what it’s good at and what it gets wrong
AI can draft and edit faster than any human. But AI writing has tells, and publishing it unedited is a mistake. Here’s what it’s actually good at.
AI writing tools have changed how a lot of people work with text. But they do some things well and others badly, and knowing the difference is what separates useful output from generic noise.
There is a version of AI writing that gets covered constantly. You describe a task, hit a button, and something coherent arrives fully formed. That is real. But treating that output as finished work is where most people go wrong.
AI is genuinely useful for writing. It has changed how many people approach drafting, editing, and proofreading. It has also introduced a specific problem nobody was quite prepared for: output that sounds professional but reads identically to every other AI-generated piece on the internet.
Getting value from AI as a writing tool means understanding what it does well and what it consistently gets wrong. The gap between those two things is larger than most promotional content suggests.
What AI writing tools are actually good at
Drafting is the clearest use case. If you know what you want to say, AI helps you say it faster. You give it context, a rough structure, and a topic. It produces a working draft in seconds.
For most people, the blank page is the hardest part of any piece of writing. AI removes that problem. What arrives is usually coherent and covers the main points. It serves as a starting point for something better, not a finished product.
Editing is where AI earns its keep most cleanly. Paste in your own draft and ask it to tighten the prose, remove filler, or flag passive voice. It does those things reliably. It is better than most people at spotting places where you have been verbose without adding value.
Proofreading is similar. AI catches grammar errors, inconsistent capitalisation, and missing punctuation. Not perfectly, but as a first filter it is faster and more thorough than most people are with their own work. Tools like Claude and ChatGPT do a credible job at this level.
Rewriting is a use case that does not get enough attention. If you have content that is outdated, too formal, or written in the wrong tone, AI can restructure it quickly. Give it the original and tell it what you want. This is useful for updating old posts, converting technical writing for a general audience, or condensing a long piece into a short summary.
The problems you need to know about
AI drafts have a recognisable texture. They tend to open the same way. They use the same transitional phrases. They hedge constantly and reach for adjectives like “nuanced” and phrases like “in today’s rapidly evolving landscape”.
A reader who has seen a lot of AI content will spot it within the first two sentences. That recognition is not neutral. It signals the content was not thought about carefully, and it undermines whatever you are trying to say.
The fix requires effort. You have to edit the draft, and not lightly. You need to pull it apart, add your own observations, and change the phrasing wherever it feels generic. Remove any sentence designed to sound authoritative rather than be useful.
Good AI-assisted writing is a collaboration where the human does the last third of the work. It is not a copy-paste of whatever the model produced. The tool does the structure. You provide the perspective and the judgement.
Factual accuracy is also a real concern. AI tools generate confident-sounding text whether or not the underlying information is correct. For anything involving numbers, regulations, or specific events, you need to verify the claims independently. This is not optional.
You can read more about why AI gets things wrong even when it sounds confident and why that pattern is so hard to detect. Knowing this before you rely on AI-generated content is what separates useful output from embarrassing corrections later.
Which tools are worth using
The tools worth using are fewer than the marketing suggests. For most writing tasks, Claude and ChatGPT are the two that perform consistently well. Claude tends to produce cleaner prose and follows specific style instructions more reliably.
ChatGPT is comfortable working with longer documents and handles structured tasks reliably. Both have free tiers that cover most everyday use. Gemini is capable but does not offer anything meaningfully different for most writing tasks.
The wave of writing-specific AI tools that appeared a few years ago, including Jasper and Copy.ai, has not aged particularly well. These tools tend to produce filler prose that reads as an immediate AI signal. For most users, starting with a general-purpose tool and giving it clear, specific instructions will outperform a niche writing product.
Anthropic publishes clear documentation on what Claude can and cannot do as a writing assistant, which is worth reading if you are deciding which tool to use. The Claude overview page covers its approach to following instructions and maintaining consistency across longer documents.
How to get the best results
Specificity is the most important variable. The more clearly you explain what you want, the better the output. “Write a blog post about AI” produces something generic. “Write an 800-word explainer for a general UK audience about how AI writing tools work in practice, using plain English and no jargon” produces something usable.
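If it helps to see the moving parts, the difference between the vague and the specific prompt above can be sketched as a small helper that assembles the brief from explicit parameters. The function and its parameter names are purely illustrative, not part of any tool:

```python
# Hypothetical helper: builds a specific writing brief from explicit
# parameters, instead of sending a one-line vague request.
def build_brief(topic: str, word_count: int, audience: str, style: str) -> str:
    return (
        f"Write a {word_count}-word explainer for {audience} "
        f"about {topic}, {style}."
    )

vague = "Write a blog post about AI"

specific = build_brief(
    topic="how AI writing tools work in practice",
    word_count=800,
    audience="a general UK audience",
    style="using plain English and no jargon",
)
# The specific version names the length, the audience, and the register,
# which is exactly the information the vague version leaves the model to guess.
```

The point is not the code itself but the checklist it encodes: length, audience, topic, and register are the variables worth stating explicitly every time.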
Breaking tasks into smaller steps also helps. Ask for a structure first, review it, then ask for the content section by section. This gives you more control at each stage and makes it easier to correct the direction before you are deep into the draft.
Tone instructions work well. If you have an existing piece of writing you are happy with, share a sample and ask the model to match the style. This reduces the generic texture significantly. It does not eliminate it, but it narrows the gap between what the model produces and what you would write yourself.
Understanding how prompts work and why wording changes the answer is the underlying skill. Getting consistently useful output from any AI writing tool comes down to how well you can communicate what you actually want.
The dependency risk
If you stop writing your own drafts entirely and let AI handle all of it, your own writing gets worse over time. The skill atrophies. You lose the ability to recognise weak AI output because you are no longer doing the underlying work yourself.
Occasional use is fine. Complete substitution is a different matter. The people who use AI writing tools most effectively tend to be competent writers already. They use the technology to move faster, not to avoid developing the skill.
What this means for you
Readers are getting better at recognising AI text. That matters if your credibility depends on your content. Publishing AI-generated text without editing it is a shortcut that costs more than it saves.
The output might look polished on the surface. But readers who pay attention notice the absence of a specific human point of view. That is what makes writing worth reading in the first place.
AI tools accelerate the work. They reduce the time it takes to produce a first draft and the effort involved in catching errors. They do not replace the judgement that makes the work useful, or the voice that makes it worth reading.