AI privacy and security: what small businesses need to know
James
Co-founder of Smash Your AI - 18 years in education, now helping businesses and individuals get real results from AI.
Last month I ran a workshop for a group of small business owners in Newcastle. I asked what was stopping them from using AI. One woman, who runs an accounting practice, put her hand up and said: "I do not trust it. What if it reads my clients' financial data and leaks it somewhere?"
She is not alone. In almost every training session I run, privacy is the number one concern. Not how to write better prompts. Not which tool to pick. Privacy. People want to know: is AI safe to use with my business data?
The honest answer is: it depends on how you use it. And most people are using it in ways that are riskier than they need to be, simply because nobody has explained the basics.
This guide will fix that. No jargon. No legal waffle. Just the practical stuff you actually need to know.
The big question: is AI reading my data?
Let us tackle this head-on. When you type something into ChatGPT, Claude, or Gemini, what actually happens to it?
Here is the simple version. AI tools process your input to generate a response. That part is obvious. The question is: do they also store your input and use it to train their models?
The answer depends on three things:
- Which tool you are using (ChatGPT, Claude, Gemini, etc.)
- Which plan you are on (free vs paid makes a big difference)
- Your settings (most tools let you opt out of training)
On free plans, most AI tools can use your conversations to improve their models. This does not mean a human is reading your chats. It means your input might be fed into future training data. Your exact words probably will not appear in someone else's conversation, but your data is not fully private either.
On paid plans, the picture is much better. Most paid plans do not use your data for training by default. And even on free plans, you can usually turn training off in settings.
Privacy comparison: ChatGPT vs Claude vs Gemini
Here is a side-by-side look at how the three most popular AI tools handle your data. This is accurate as of March 2026.
| Feature | ChatGPT (OpenAI) | Claude (Anthropic) | Gemini (Google) |
|---|---|---|---|
| Free plan trains on your data? | Yes, by default | No, not by default | Yes, by default |
| Paid plan trains on your data? | No (Plus/Team/Enterprise) | No (Pro/Team) | No (Advanced/Workspace) |
| Can you opt out of training? | Yes, in settings | Already opted out | Yes, in activity settings |
| Data retention | 30 days (then deleted), or kept if training is on | 90 days for safety, not used for training | Up to 18 months for free plan |
| Team/business plan available? | Yes (Team and Enterprise) | Yes (Team and Enterprise) | Yes (Workspace/Enterprise) |
| Best for privacy? | Good on paid plans | Strong by default | Good on paid plans |
The key takeaway: if you are on a free plan, your data is probably being used for training unless you change your settings. Paid plans are significantly more private across the board.
What you should NEVER put into AI
Regardless of which tool or plan you are on, some things should never go into an AI chat. Full stop.
Never share these with AI tools
- Client personal data — names, addresses, phone numbers, email addresses
- Financial information — bank details, account numbers, invoices with real figures
- Passwords or API keys — never paste credentials into a chat
- Medical or health records — yours or anyone else's
- Confidential contracts — full agreements with named parties and sensitive terms
- Employee personal details — HR records, salary information, disciplinary notes
- Customer databases — even partial exports with identifying information
I learned this lesson early. When I first started using AI for work, I nearly pasted a full client email into ChatGPT to help me draft a reply. It had the client's name, their company details, and a financial query. I caught myself, but it made me realise how easy it is to share things without thinking.
What is perfectly safe to share
AI is still incredibly useful. You just need to be smart about what you put in. Here is what is completely fine.
Safe to share with AI tools
- General business questions — "How do I write a returns policy?"
- Generic content — blog posts, social media ideas, marketing copy
- Your own writing — for proofreading, rewriting, or improving
- Public information — anything already on your website or published
- Anonymised data — numbers and trends with all names removed
- Templates and frameworks — "Create a project plan template for a 3-month campaign"
- Learning and research — "Explain how corporation tax works for a sole trader"
The rule is simple: if you would not write it on a whiteboard in a shared office, do not paste it into AI.
The anonymous rewrite technique
This is something I teach in every workshop and it is probably the single most useful privacy trick.
Sometimes you need AI to help with something that involves real data. Maybe you want it to draft a tricky email to a client. Or help you analyse some financial figures. The solution is simple: strip out all identifying information before you paste it in.
Unsafe: pasting the real email
"Help me reply to this email from Sarah Thompson at Acme Accounting Ltd. She says their invoice #4521 for £12,450 is overdue and they are considering legal action. Their account number is 45-221-B."
Safe: anonymised version
"Help me reply to an email from a client at an accounting firm. They say their invoice is overdue and they are considering legal action. I want to be professional but firm that we are processing it. Draft a reply."
Same task. Same quality of output. Zero risk. The AI does not need the real names, numbers, or account details to write a good reply. It just needs the situation.
Get into the habit of asking yourself: does the AI actually need this specific detail, or can I describe it generically? Nine times out of ten, generic works perfectly.
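If you do this often, you can even semi-automate the habit. Here is a minimal sketch of a pre-paste scrubber in Python. The patterns are illustrative assumptions, not a complete PII scrubber: they catch emails, UK-style phone numbers, pound amounts, and invoice-style numbers, but real names and company names still need a manual check before you paste anything in.

```python
import re

# Illustrative patterns only -- a starting point, not a guarantee.
# Names and company names are not covered and need a manual pass.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"),
    "phone": re.compile(r"\b(?:\+44\s?|0)\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "money": re.compile(r"£\s?\d[\d,]*(?:\.\d{2})?"),
    "invoice": re.compile(r"#\d{3,}"),
}

def anonymise(text: str) -> str:
    """Replace common identifying details with generic placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

message = "Invoice #4521 for £12,450 is overdue. Contact sarah@acme.co.uk."
print(anonymise(message))
# -> Invoice [INVOICE] for [MONEY] is overdue. Contact [EMAIL].
```

Treat the output as a first pass, then read it through once yourself. The point is to make the safe version the easy version.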
How to turn off training on your data
If you are using a free plan and want to keep your data private, here is how to opt out on each platform.
ChatGPT
- Click your profile icon (bottom left)
- Go to Settings
- Click Data controls
- Turn off "Improve the model for everyone"
Note: this disables chat history too on free plans. Paid plans keep history regardless.
Claude
Claude does not use your conversations for training by default, on any plan. No action needed. If you want to double-check, go to Settings > Privacy and confirm the training toggle is off.
Gemini
- Go to myactivity.google.com
- Find Gemini Apps Activity
- Turn it off
- You can also delete past activity from here
This takes about two minutes and it is the single most impactful thing you can do for your AI privacy right now. If you do nothing else after reading this article, go and do this.
GDPR basics for using AI in your business
GDPR sounds scary. It does not have to be. Here is what you need to know in plain English if you are a UK business using AI tools.
GDPR applies when you process personal data. "Personal data" means anything that can identify a real person: a name, an email address, a phone number, even an IP address. If you are putting personal data into AI tools, GDPR is relevant.
The 4 GDPR rules that matter most for AI
- Have a lawful basis. You need a legitimate reason to process someone's data. For most business uses of AI, this is "legitimate interests" (i.e. running your business). But you cannot just dump customer data into ChatGPT for fun.
- Minimise the data. Only share the minimum data needed. This is where the anonymous rewrite technique comes in. If you do not need the person's name for the AI to help you, do not include it.
- Know where data goes. Most AI providers are US-based. Under UK GDPR, transferring personal data outside the UK requires appropriate safeguards. Paid business plans usually include these safeguards in their terms. Free plans might not.
- Update your privacy policy. If you are using AI to process customer data (even indirectly), your privacy policy should mention it. A simple line about using AI tools for business operations is usually enough.
The practical takeaway: if you anonymise data before putting it into AI, most GDPR concerns disappear. The simplest way to stay compliant is to not put personal data in at all.
If your business handles a lot of sensitive data (medical, legal, financial), it is worth getting specific advice. But for most small businesses, the approach above covers it.
Creating an AI usage policy for your team
If you have staff using AI (and they probably are, whether you know it or not), you need a simple AI usage policy. It does not have to be a 20-page document. A one-page checklist is enough.
Here is what to include:
AI usage policy checklist
- Which AI tools are approved — list the specific tools staff can use (e.g. ChatGPT Plus, Claude Pro)
- What data must never be shared — client names, financials, passwords, personal data
- The anonymous rewrite rule — always strip identifying info before pasting into AI
- Settings must be correct — training must be turned off on all accounts
- All AI output must be checked — especially facts, figures, and anything client-facing (see our fact-checking guide)
- No client work on free plans — use paid plans with proper data protections
- Report any concerns — if someone accidentally shares sensitive data, who do they tell?
Print this out. Stick it on the wall. Make it part of your onboarding. The biggest risk is not the technology. It is people not knowing the rules.
Why team and enterprise plans matter
If your business is using AI regularly, the difference between a free plan and a team plan is significant.
| Feature | Free / Personal | Team / Business plan |
|---|---|---|
| Data used for training | Often yes (by default) | No |
| Admin controls | None | Manage users, set permissions |
| Data processing agreements | Basic terms only | Full DPA for GDPR compliance |
| SSO / security integrations | No | Yes (enterprise plans) |
| Shared workspace | No | Yes — shared prompts and projects |
| Cost | Free or ~£20/month per person | ~£20-25/month per person |
For a team of five, the upgrade to a business plan might cost an extra £25-50 per month total. That is a tiny price for proper data protection, admin controls, and peace of mind. If you are handling client data of any kind, it is a no-brainer.
Need help setting up AI safely for your team?
Our AI training workshops cover privacy, security, and practical usage — tailored to your business. We help you pick the right tools, configure them securely, and create a usage policy your team will actually follow.
Book a workshop

Quick reference: what to share vs what not to share
Keep this table handy. It covers the most common scenarios I see in small businesses.
| Scenario | Safe? | How to do it safely |
|---|---|---|
| "Help me write a blog post about our service" | Yes | Public info, no issues |
| "Draft a reply to this client email" (with full email pasted) | No | Anonymise first — remove names, figures, company |
| "Summarise these meeting notes" | Depends | Fine if notes are internal. Remove names if discussing clients |
| "Analyse this spreadsheet of customer data" | No | Remove all identifying columns first. Keep only the numbers you need |
| "Write a job description for a marketing role" | Yes | No personal data involved |
| "Help me with this HR issue about an employee" | No | Describe the situation generically. "An employee" not "John in sales" |
| "Create a social media calendar for next month" | Yes | Public content planning, no issues |
5 things to do right now
If you have read this far, here is your action list. These five things will cover 90% of your AI privacy risks.
- Turn off training. Go into settings on every AI tool you use and disable "improve the model" or "activity" settings. Two minutes per tool.
- Upgrade to paid plans for work use. If you are using AI for anything involving clients or sensitive data, the £20/month is worth it.
- Adopt the anonymous rewrite habit. Before you paste anything, ask: "Does AI need the real names and numbers?" Usually the answer is no.
- Create a one-page AI policy. Use the checklist above. Share it with your team. Pin it to the wall.
- Audit your current usage. Look back at your recent AI conversations. Did you share anything you should not have? If so, delete those conversations and adjust your approach going forward. Our AI audit guide can help with this.
The bottom line
AI privacy is not as scary as it sounds. The tools are getting better at protecting your data, especially on paid plans. And most of the risk comes from user behaviour, not the technology itself.
The woman from my Newcastle workshop? She came back the following week and told me she had set up Claude Pro for her practice, turned off training, and started using the anonymous rewrite technique for all client-related queries. She said she felt confident for the first time. Her exact words were: "It is just common sense once someone explains it."
She is right. It is common sense. You just need someone to lay it out clearly.
Use AI. Protect your data. Keep it simple.