The Gadget in the Room: Why AI Wearables in the Workplace Deserve a Much Harder Conversation

There's something quietly remarkable happening in offices, meeting rooms, and conference calls right now. Alongside the laptops and phones we've long accepted as standard fixtures of working life, a new category of device has slipped in almost unnoticed — AI-powered glasses, smart earbuds, and wearable cameras. They look like ordinary accessories. Many of them are, frankly, impressive pieces of engineering. And therein lies the problem.

We've welcomed them in the way we tend to welcome any sleek, well-marketed piece of technology: with curiosity, a little envy, and almost no scrutiny at all.

The "Cool Gadget" Blind Spot

It's worth being honest about why this has happened. AI wearables are genuinely interesting. A pair of glasses that can transcribe a meeting in real time, or earbuds that can translate a foreign language as you speak, represent a meaningful leap in what consumer technology can do. They're conversation starters. They signal a certain kind of forward-thinking. In a professional environment where innovation is valued, wearing one can feel like a statement of intent.

But the moment we frame something as a gadget — as a toy, a novelty, a fun addition to the toolkit — we tend to stop thinking critically about what it actually does. And what these devices actually do is record. Continuously. In spaces filled with other people who have not agreed to be recorded.

That's not a minor technical footnote. That's the entire issue.

What's Actually Happening When You Wear One

When someone puts on a pair of AI glasses or taps a smart earbud to start their assistant in an office environment, several things happen simultaneously that most bystanders are completely unaware of:

Conversations are captured, not just the wearer's side, but everyone in the room. Faces may be scanned and potentially identified. Meeting content, strategy discussions, client details, and personal conversations can all be swept up in the recording. That audio, video, or transcription data is then typically transmitted to a cloud server, often operated by a third party and potentially in a different country, where it may be stored, analysed, and retained for purposes that are opaque even to the person wearing the device, let alone to those being recorded.

In a home or purely personal setting, this raises ethical questions. In a professional workplace, it raises serious legal ones.

The Laws These Devices Are Quietly Ignoring

This is where the conversation becomes genuinely uncomfortable, because the legal exposure here is not hypothetical or distant. In the United Kingdom, several pieces of legislation are directly and materially relevant:

The UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 require that personal data, which includes a recorded voice, a recognisable face, or any other information that can identify an individual, be collected lawfully, for a specified purpose, on a lawful basis, and with the knowledge of those affected. Covertly recording colleagues in a meeting room almost certainly fails every one of those requirements. The Information Commissioner's Office (ICO) has been clear that workplace surveillance is subject to UK GDPR principles, and that employees have a reasonable expectation of privacy even at work.

The Regulation of Investigatory Powers Act 2000 (RIPA), whose interception provisions were largely superseded by the Investigatory Powers Act 2016, is an even more pointed piece of legislation. It governs the interception of communications, and while it is primarily aimed at public authorities, its principles, and the spirit of related case law, make covert recording of private conversations a legally precarious act for anyone, not just state actors.

The Human Rights Act 1998, which incorporates Article 8 of the European Convention on Human Rights, protects the right to privacy. Employees do not forfeit that right by walking into an office.

Common law on breach of confidence has long established that information shared in professional contexts carries an expectation of confidentiality. Recording and transmitting that information without consent can constitute a breach of those obligations.

There is also a more specific corporate dimension. For organisations operating under regulated frameworks — financial services under the Financial Conduct Authority, healthcare under NHS governance, legal practices under the Solicitors Regulation Authority — the covert capture and export of client or patient data could trigger serious regulatory consequences, potentially including prosecution.

And yet, how many of us have given any of this a second thought when someone walked into a meeting wearing a new pair of AI glasses?

The Data That Leaves the Building

Here is the element of this issue I keep returning to, because I think it is the least understood and the most consequential.

When data is recorded by an AI wearable and transmitted to a cloud platform, it leaves the physical and legal environment of the organisation. It passes from a space where your company has (or should have) data governance policies, access controls, and retention rules — into a space governed entirely by the terms and conditions of a consumer technology company. Terms and conditions that very few people have read, and that can change at any time.

Where is that data stored? For how long? Who can access it? Is it used to train AI models? Can it be subpoenaed? What happens if the company is acquired, goes bankrupt, or suffers a data breach?

These are not paranoid questions. They are basic due diligence questions that any data protection officer would ask before onboarding a new enterprise software product. But we are not asking them about devices being walked into our offices by individual employees acting entirely on their own initiative.

This is a governance gap with real consequences. Confidential client discussions, commercially sensitive strategy sessions, HR conversations — all potentially recorded, all potentially residing on servers that your organisation has no contractual relationship with and no visibility into.

The Consent Problem

There is a principle in data protection law that is both simple and profound: people have the right to know when their data is being collected. Consent must be informed, specific, and freely given. You cannot consent to something you are unaware of.

When a colleague activates an AI wearable in a shared space, every other person in that space becomes a data subject — almost certainly without their knowledge, certainly without their consent. In a small meeting of four people, the device may be capturing the voices, faces, and content of three individuals who have agreed to nothing.

This is not a grey area. It is a clear infringement of the right to informed consent as it is understood under UK GDPR. The fact that it is happening informally, through a personal consumer device rather than an organisational system, does not make it legally acceptable. It may actually make it harder to manage, because there is no obvious responsible party and no audit trail.

Technology Runs Faster Than the Law — But the Law Will Catch Up

This is a pattern we have seen before. Social media arrived before we had frameworks for defamation online. Smartphones arrived before we understood the implications of constant location tracking. Algorithmic decision-making arrived before we had meaningful rules about automated profiling. In each case, the technology moved quickly, society adapted enthusiastically, and the legislative and regulatory apparatus spent years catching up.

AI wearables are following exactly the same trajectory. They are already in workplaces. Meaningful, enforceable workplace-specific regulation is not. The ICO has issued guidance on employee monitoring more broadly, and the Employment Practices Code provides some framework, but there is no specific, comprehensive regulation addressing the use of personal AI recording devices in professional environments.

That regulation will come. The question is what happens to the data in the meantime — and who bears responsibility for the harm caused during the interval between technological adoption and legal accountability.

What Responsible Organisations Should Be Doing Now

I want to be clear: I am not arguing that AI wearables are inherently bad, or that people should not own them. The technology itself is not the villain here. The problem is the absence of considered policy around how and when it is used.

A sensible organisational response does not require banning devices. It requires thinking. Specifically:

Clear workplace policies should address AI wearables explicitly — not as a catch-all under a general "personal devices" clause written in 2015, but with specific reference to recording-capable AI devices, where they can and cannot be activated, and what the consequences of non-compliance are.

Designated areas and meeting types should be established where AI recording devices are prohibited. Client meetings, HR conversations, board discussions, and any session involving commercially sensitive information should be treated as restricted environments by default.

Informed consent mechanisms should be considered for any professional context in which AI tools are legitimately used — similar to the recorded-line disclosures that financial services firms already deploy. If a meeting is being captured, everyone in the room should know it.

Data processing agreements should be reviewed. If employees are using AI tools that transmit data externally, organisations need to understand what that means for their own GDPR obligations. In many cases, an organisation could be considered a data controller for data captured by an employee using a personal device in a work context.

Education rather than prohibition is, ultimately, the most durable solution. Most people wearing these devices are not acting with malicious intent. They genuinely have not thought through the implications. Helping people understand what their devices actually do — and what the professional and legal consequences can be — is more effective than a policy document nobody reads.

A Final Thought

We are at a genuinely important inflection point. The devices in question are only going to become more capable, more miniaturised, and more ubiquitous. The gap between what they can do and what most people assume they do will only widen.

The question is not whether AI wearables have a place in professional life — they probably do, in many contexts, with appropriate safeguards. The question is whether we are going to think carefully about that place before it is decided for us, by default, through accumulated habit and a thousand individual purchasing decisions made on the basis of novelty rather than scrutiny.

The gadget in the room is recording. It might be time to ask who gave it permission.

Views expressed here are personal reflections and do not constitute legal advice. If your organisation is reviewing its approach to AI device use in the workplace, please consult a qualified data protection or employment law specialist.
