If you use AI tools in your work and your clients' information is involved — UK GDPR applies to you.

ChatGPT. Claude. Canva. Mailchimp. Zoom. Fresha. Booksy. If any of these touch real information about real people in your work — you are a data controller under UK GDPR, and that comes with specific obligations.

Most professionals using AI tools in their work have never been told what UK GDPR actually requires of them. Not the headlines — the specific rules. What consent you need in writing. Which AI accounts are unlawful for client data. What records you must keep and what happens if something goes wrong.

Every product in this suite is written for a specific profession — therapists, coaches, personal trainers, aesthetic clinics, hair and beauty professionals, freelance creators, and small businesses. Not adapted from generic compliance advice. Written from scratch, for your work, verified against live sources in April 2026.

Find your profession below. Each guide tells you exactly where you stand and gives you the documents to fix it.

Three things most people using AI in their work don't know

Using a free AI account with client data is almost certainly unlawful.

ChatGPT Free, Claude Personal, Google Gemini. If you are using any of these with information about real clients or customers, you are processing personal data without a Data Processing Agreement in place. Under UK GDPR Article 28, that is unlawful from the first use. Not when something goes wrong. From the first use.

If your staff use AI tools you haven't approved, the breach is yours.

It does not matter that you didn't know. It does not matter that they were trying to be efficient. As the data controller, you are responsible for how personal data is processed across your entire operation — including by people who work for you. An employee pasting client data into a free AI account is your liability, not theirs.

Recording a client session with an AI tool without prior consent may be a criminal offence.

Not a fine. Not a warning. A criminal offence under RIPA 2000. AI transcription tools — Otter.ai, Fireflies, automated Zoom and Teams transcription — capture personal data at the moment of recording. Using them without the knowledge and prior consent of the person being recorded crosses a legal line that most practitioners don't know exists.

These are not edge cases. They are the three most common compliance failures we see — and none of them require bad intent to trigger.

Is this for me?

If you can answer yes to any of these, UK GDPR applies

Do you use ChatGPT, Claude, or any other AI tool to help write notes, reports, programmes, or content — and does any of that involve real information about real clients or customers?

Do you use Canva, Mailchimp, or any AI-powered marketing tool with customer data?

Do you use ChatGPT to generate programmes, session plans, or recommendations based on a client's health history or personal information?

Do you use a chatbot on your website or in your customer communications — and does it collect, respond to, or make decisions based on information about real people?

Do you record client calls or sessions using Zoom, Teams, or any AI transcription tool like Otter.ai?

Does your team use AI tools day to day — and do you know which accounts they're using and what data is going into them?

Do you use a booking platform — Fresha, Booksy, Calendly, Acuity — that holds client health information, allergy records, or consultation notes?

Do you take photographs of clients — before and after images, progress photos, portfolio shots — and store or share them digitally?

Do you receive client data from the businesses you work with — customer lists, email audiences, social media data — and use AI tools to work with it?

The Three-Tier Architecture

Fixing this properly requires more than one step. That’s why the suite is built in three layers. Each tier solves a different part of the problem. You can buy them separately — but they are designed to work together.

Tier One

UNDERSTAND

Know what the law requires and why it applies to you

Tier Two

OPERATE

Build the legal infrastructure to run a defensible operation

Tier Three

PROTECT

Stay ahead of the risks most businesses haven’t seen yet

The Problem This Suite Solves

AI tools are now part of everyday professional practice. Most people using them are doing so in good faith, trying to work more efficiently and serve their clients better. The problem is not the intention.

The problem is the gap between how AI tools are being used and what the law requires — and that gap is widening.

The Five Most Common Breaches

  • Using a free AI account with client data. In most cases, this places you in breach of Article 28 from the first use. Without a DPA, the requirements for lawful processing are unlikely to be met.

  • Recording clients without their prior consent. In certain circumstances, this may constitute a criminal offence under RIPA 2000.

  • Assuming your existing consent forms cover AI. They don't. AI processing requires separate, specific consent — particularly for health and biometric data.

  • Relying on an undocumented lawful basis. An undocumented lawful basis is effectively no lawful basis. Article 6 requires prior documentation.

  • Keeping no records of your processing decisions. Memory cannot substitute for contemporaneous records. The absence of records is itself evidence.

Get This Sorted In Three Easy Steps

Instant access. Clear guidance. Yours to keep.

STEP ONE

Find what applies to you

STEP TWO

Put the right documents in place

STEP THREE

Start operating with a defensible position