How to create a simple AI policy for a small business
Most small businesses using AI have no policy in place. Here is a practical guide to setting clear ground rules without needing a legal team.
Most small businesses using AI right now have no formal policy in place. Someone signed up for ChatGPT, liked it, told a colleague, and now three people are pasting customer emails into it daily. Nobody made a decision about that. It just happened. That is understandable. AI tools are cheap, genuinely useful, and easy to start using. But using them without any ground rules creates real risks, particularly around data protection and intellectual property. A simple internal document gives everyone on your team clarity about what is allowed, what is not, and why. It also gives you a defensible position if anything ever goes wrong.
Start with what you are already doing
Before writing your AI policy, take a realistic look at how AI is being used in your business today. Which tools are people using? What tasks are they using them for? What information is going into those tools? Most of the time, the honest answer is: more than you think.
Drafting emails, summarising documents, writing social media captions, generating images, transcribing meetings, researching competitors: these are all common AI use cases in small businesses. The information going in might include client names, financial figures, sensitive internal discussions, or material covered by NDAs. Understanding the current situation is not about catching anyone out. It is about knowing what ground your AI policy needs to cover.
This audit does not have to be formal. A short conversation with your team, or a quick internal survey, will surface most of the tools in use. What matters is that you know the actual situation rather than the assumed one. Many business owners who go through this process discover that their team is using three or four AI tools they were not aware of. That information is essential before you write a single line of any policy document.
What you are likely to find is a version of what is sometimes called shadow AI: tools being used across the business without any central visibility or approval process. Our post on what shadow AI is and why it matters at work covers how this pattern develops and why it creates compliance risk before any policy is in place.
The data question: what goes in and what stays out
The biggest legal exposure for a UK small business using AI tools comes from data protection. The UK GDPR gives individuals rights over how their personal data is handled. When you paste personal information into a consumer AI tool, that data is typically processed by a third-party provider, often one based in the United States.
Whether that counts as a data transfer, and what obligations it creates, depends on the tool and its terms of service. Most consumer tiers of ChatGPT, Claude and similar products do not offer the data processing agreements that UK GDPR requires when personal data is handled in a business context. The Information Commissioner’s Office has published detailed guidance on AI and data protection that is worth reading before finalising your approach.
The practical rule is this. Do not paste personally identifiable information into a consumer AI tool unless the tool’s enterprise tier explicitly covers that use. Names, email addresses, financial details, health information: anything that identifies a real person should stay out of consumer products.
This is not a reason to stop using AI tools. It is a reason to be thoughtful about what goes in. Your AI policy should state this data rule clearly, in plain language. Every person on your team should be able to apply it without reading a legal document first.
Intellectual property: two questions worth asking
If your business produces original work, software, design or writing, there are two IP questions worth addressing in your AI policy. Both relate to what happens when AI is involved in that work.
The first is what happens to content you put into an AI tool. Most enterprise tiers are explicit that your input data is not used to train the model. Consumer tiers are less consistent. If your business uses proprietary code or internal strategy documents as AI inputs, check the tool’s terms; they may grant the provider rights to that material.
The second is the status of AI-generated output. The legal position in the UK is not fully settled. Works generated entirely by AI, without meaningful human creative input, currently sit in a grey area when it comes to copyright ownership. If you intend to license or protect AI-assisted content, keep a human genuinely involved in the creative process. That is the safer approach under current UK law.
What your AI policy actually needs to say
A practical AI policy for a small business does not need to be long. It needs to answer a few specific questions clearly enough that any team member can use it as a guide without further help.
Which tools are approved for use? Name them. If you use a paid tier of a product for internal drafting, say that. If you use a specialist tool for a specific purpose, name it. Allowing people to use any tool they find means you have no visibility over what is happening and no way to address problems consistently. Approved tools are a basic element of any workable AI policy.
What information can go in, and what cannot? The clearest approach is to describe categories. Personal data about clients or employees stays out. Documents covered by NDA stay out. Unpublished financial information stays out. Internal communications that would be embarrassing if disclosed stay out. Everything else is generally fine, within common sense. Our post on which tasks to give to AI first covers where AI genuinely adds value and where it tends to fall short.
What is the output used for? Draft copy should be reviewed and edited by a human before it goes to a client or is published externally. AI-generated legal or financial content should never be used without a qualified professional reviewing it. Anything going out under your brand should reflect your actual standards, not a first draft from an AI.
It is also worth being explicit about accuracy. AI tools generate confident-sounding text that is sometimes factually wrong. Your AI policy should make clear that any factual claims in AI-generated content must be checked before publication or submission. This is especially important for anything touching on financial figures, regulatory details, product specifications, or client-specific information. A brief human verification step at the end of the drafting process prevents most problems before they reach anyone outside the business.
Who is accountable? In a small business this is usually the owner or a designated person. Your AI policy should name that person: they handle questions about the policy, decide on exceptions, and review the document as the tools evolve.
Keep it short and keep it current
The temptation with any compliance-adjacent document is to make it comprehensive. A comprehensive AI policy for a small business without a legal team is a document nobody will read. A page covering the main categories, the specific tools in use, the data rules, and the output rules is enough to be genuinely useful.
Date it. Review it every six months, or whenever you add a significant new tool. AI products change quickly, their terms change, and the legal landscape around them is still developing. An AI policy that was accurate nine months ago may need updating today. The ICO updates its AI guidance regularly. The EU AI Act is also introducing new obligations in stages through 2026 and 2027, and it affects some UK businesses trading into Europe. Keeping your AI policy current is not a one-off task.
The businesses that will handle this best are not the ones with the longest policies. They are the ones where the people actually using the tools understand the basics and apply them consistently every day. A one-page AI policy covering approved tools, data rules and output standards will do more for your business than a 20-page document. The latter usually sits unread in a shared drive.