The compliance risks your business does not know it has yet.
Most compliance thinking focuses on what you do with data deliberately. The AI Safe Risk & Data Pack addresses something different — what AI technology is doing around you, often without a conscious decision on your part. Four policies covering the risks that are easy to miss, easy to dismiss, and increasingly difficult to defend once they have occurred.
The risk is no longer just what you do — it is what the technology is doing around you. Images becoming identifiable without your knowledge. Audio captured passively. AI recreating people without consent. Professionals acting on AI advice outside their competence. None of these require a deliberate decision to create legal exposure.
This pack addresses the layer of risk that the Starter Pack and Legal Pack do not cover. It does not replace them — it extends your protection into territory they were not designed for.
What has changed — and why it matters
| What is happening | What most businesses assume | What is actually occurring |
|---|---|---|
| Client photographs uploaded to AI tools | "It's just a photo." | AI systems can derive health indicators, biometric identifiers, and protected characteristics from images that appear routine. You may be processing Special Category Data without realising it. |
| AI-generated transformation imagery in marketing | "It's just content." | AI-generated results presented as genuine client outcomes may be misleading marketing under the CPUTRs, a CAP Code breach, and — in some cases — a criminal offence under the Online Safety Act 2023. |
| Wearing smart glasses or AI earbuds at work | "I'm not recording anything." | Consumer AI wearables have always-on capture capabilities. Data may transmit continuously without any active decision. Using Live Listen directed at another person without consent is potential criminal interception under RIPA 2000. |
| Presenting AI recommendations to clients | "The AI suggested it." | You suggested it. The moment an AI output is presented under your name, you have endorsed it. You are professionally and legally responsible for every recommendation — regardless of whether it was AI-generated. |
What Is In This Pack — Four Risk Policies
Image & Biometric Risk Policy
For anyone who takes client photographs, uses AI skin analysis, uploads images to AI platforms, or uses AI-assisted aesthetic or body composition tools.
→When before/after photographs become Special Category Data under Article 9
→Separate consent requirements for clinical photography, AI analysis, and marketing use
→DPA and Article 44 requirements for image processing platforms
→Sector guidance for aesthetic clinics, PTs, therapists, and beauty professionals
Article 9 UK GDPR · Article 4(14) · ICO Biometric Guidance 2023
AI Wearables & Passive Data Capture Policy
For any business where staff, contractors, or visitors may wear smart glasses, AI earbuds, or translation devices on premises or in client-facing settings.
→Meta Ray-Ban smart glasses: what they capture, where it goes, and what the law says
→AirPods Live Listen: potential criminal interception under RIPA 2000 s.3
→AI translation earbuds: simultaneous GDPR breaches from a single conversation
→Clear policy rules for client settings, staff meetings, and shared spaces
RIPA 2000 s.3 · IPA 2016 · Articles 5, 28, 44 UK GDPR
Deepfake & AI-Generated Content Risk Policy
For anyone using AI transformation imagery, predicted outcome visualisations, AI-modified before/after content, or AI-generated marketing materials depicting real people or results.
→Mandatory disclosure standard — what labelling requires and where it must appear
→AI-generated testimonials: an absolute prohibition and why
→Consultation visualisation protocol — disclosure before showing predicted outcomes
→Converging liability: CPUTRs, ASA CAP Code, Online Safety Act 2023, passing off
Consumer Protection Regs 2008 · ASA CAP Code · Online Safety Act 2023
AI Boundaries — Scope of Practice Policy
For any professional whose AI tools generate health, clinical, nutritional, psychological, or other specialist recommendations — and who presents those outputs to clients.
→Scope boundaries by sector: what AI may assist with, and what it must never replace
→The 'AI suggested it' problem — why presenting AI output as your assessment is the risk
→Professional body obligations alongside UK GDPR — which standard takes precedence
→Article 22 human review obligations for AI-influenced decisions about clients
Bolam/Bolitho standard · Consumer Protection Regs 2008 · Article 22 UK GDPR
Three of these risks carry potential criminal liability — not just regulatory fines
Using AirPods Live Listen directed at another person without their knowledge may constitute criminal interception under RIPA 2000 Section 3. Maximum sentence: two years' imprisonment.
Creating intimate deepfake images without consent is a criminal offence under the Online Safety Act 2023. The offence does not require intent to harm.
Recording a consultation via smart glasses without disclosure creates potential RIPA 2000 exposure regardless of whether the recording was intentional.
THE AI RISK CHECK — APPLIED BEFORE EVERY EMERGING TECHNOLOGY DECISION
Five questions. Before the AI Decision Test. Not instead of it.
The Legal Pack's AI Decision Test asks whether your processing is lawful. The AI Risk Check asks something earlier and different: do you fully understand what this technology is doing? You must be able to answer all five before you run the Decision Test.
1 - Is this capturing more data than I realise — images, audio, biometric signals?
2 - Could this data identify someone uniquely, even if that was not the intention?
3 - Could this data be reused, replicated, altered, or used to generate a likeness?
4 - Am I operating within my professional competence in how I am using this?
5 - Would I be comfortable explaining this decision clearly to a regulator?
If you are unsure — you are already in a risk position. Do not proceed until you can answer all five.
Where These Risks Show Up — by Sector
Aesthetic clinics
Before/after images are clinical records and likely Special Category Data. AI transformation imagery in marketing requires mandatory labelling. The 8-year clinical negligence retention period applies to AI-processed images.
Personal trainers
Body composition images and progress photos may constitute health data when processed by AI. AI-generated programmes using health data require explicit Article 9 consent. Scope of practice boundaries apply to AI nutrition and injury recommendations.
Therapists & coaches
AI transcription tools capture visual data from video sessions beyond audio. Any AI tool active during a session must be specifically disclosed. Scope of practice: AI output in therapeutic contexts requires genuine professional review before application.
SME owners & managers
Staff use of wearable AI devices in meetings, HR discussions, and client-facing settings creates institutional liability. AI marketing tools generating content from client data require DPAs. Scope of practice applies wherever AI generates specialist advice presented under your brand.
How this fits alongside the other packs
The Starter Pack covers what you need to know. The Legal Pack covers how to operate safely. This pack covers the risks you do not yet know are there. The three packs work together: this one does not replace the others; it extends your protection into territory they were not designed for.
Recommended Pathways
AI Safe Risk & Data Pack + Companion Guide — £79
Four risk policies, six scenario walkthroughs, and the regulatory watch appendix — the complete emerging-risk framework for practitioners whose AI use extends into imaging, biometrics, wearables, or AI-generated content. Requires Tier 1 and Tier 2 as the foundation.
Aesthetics & Imaging Bundle — £119
The AI Safe Starter Pack, the Aesthetic Clinics Action Guide, and the Risk & Data Pack with Companion Guide — the only bundle built specifically for practitioners using AI imaging tools, where biometric data obligations, the 8-year clinical negligence retention period, and deepfake criminal liability all apply simultaneously.
Full Suite Bundle — £189
All three tiers — the most complete compliance position available for any practitioner or small business using AI tools with personal data. Includes the profession-specific guide, the full legal infrastructure, and the emerging-risk protection most competitors have not yet addressed.