You cannot court-martial a chatbot. The human is always accountable.
About Louize Clark & AI Policies
The conversation about AI governance has been happening for two years, and most of the people it affects haven't been in the room.
I have spent the last two years in those rooms.
Parliamentary briefings. Diplomatic receptions. Policy discussions with ambassadors from Caribbean nations, African states, and the United States. Conversations with High Commissioners about what AI means for their countries, their economies, and their people. Meetings with senior legal professionals, insurers, and leading technologists. A call for evidence on AI-enabled weapons systems attended by a Secretary of State and some of Europe's most distinguished academic minds. Roundtables with construction industry leaders. Keynotes to frontline workers. An AI Global Summit. The NHS. BBC Studios. The Institute of Directors.
The same conversation, in very different rooms, with very different stakes.
What I kept noticing, in every room, at every level, was the same structural gap. Governments and large organisations have legal teams, compliance officers, AI advisory boards, and external counsel. They have the infrastructure to understand what is coming and respond to it. They are slow, but they are not unprotected.
The therapist working alone from a home office has none of that.
Neither does the personal trainer managing client health data. The aesthetic nurse running her own clinic. The coach building an online practice. The salon owner using a booking platform that holds allergy records. The small business owner with a team of five, all using AI tools every day, with no one in the organisation whose job it is to ask what's actually happening to the data.
These people are not careless. They are not reckless. They are working hard, adopting technology in good faith, and nobody has explained clearly what the law requires of them.
That is not a small problem. It is a structural one. And it exists because the guidance available to them was written for someone else — for data protection officers at large organisations, for corporate compliance teams, for people with the time and resources to commission legal advice at several hundred pounds an hour.
I didn't set out to build a product. I set out to fill a gap that I couldn't stop seeing.
My background spans law, technology, construction, hospitality, marketing, coaching, and AI advisory.
On paper, that looks unconventional. In practice, it has given me something that a single-specialism background rarely does — the ability to see how people, businesses, and systems actually function across industries, not simply how frameworks suggest they should.
After studying Law, I moved into commercial roles and found myself involved in some of the earliest large-scale technology deployments — FIFA World Cup VR experiences, connected commercial kitchens, early AI-driven age verification projects. I was an early beta tester of OpenAI's ChatGPT before AI had entered everyday conversation. I have since advised on AI within healthcare, infrastructure, and the construction sector, and contributed to discussions on responsible AI deployment, growth zones, and national security at senior levels.
The breadth of that experience — across boardrooms, parliamentary rooms, diplomatic receptions, and construction sites — gave me a perspective that I have not found replicated elsewhere. I have been in the rooms where AI policy is being shaped. I have also been in the rooms where people are using AI tools every day without any awareness that regulation applies to them at all.
The gap between those two rooms is where AI Policies UK was built.
One thing became clear through all of it.
You cannot hold an AI system accountable. You cannot prosecute a model, court-martial a chatbot, or send an algorithm to prison. The human is always accountable. The business is always accountable. The sole trader, the practitioner, the employer — they are the data controller, they carry the liability, and the law applies to them whether or not anyone has told them so.
Most haven't been told.
AI Policies UK exists to change that. Every document in the suite is written for a specific profession, built around the specific risks that profession faces when using AI tools with personal data, and designed to be used without professional support. Profession-specific. Verified against live sources. Plain English.
Not because compliance should be easy. But because it should be accessible.
Named in the Top 50 Citiesabc Women Power Leaders, curated by Dinis Guarda.
Speaker: Business ABC AI Global Summit · Houses of Parliament · Institute of Directors · Institute of Construction Management · Building for Humanity · AFC Connect · Royal Automotive Club
AI Policies UK is not a law firm and does not provide legal advice. The suite provides professionally structured guidance for practitioners and small businesses. For complex or high-stakes situations, qualified legal advice remains the right step.
My Mission
To create a level playing field in the AI age, where knowledge is not reserved for corporations and no one is exposed simply because they weren't told.