We Didn't Choose This. We Just Adapted.
On the biggest reprogramming of human behaviour in living memory — and what it means that AI arrived at the end of it.
There's a question I keep coming back to, and it's not a comfortable one.
When did we last make a conscious decision about how we live, work, and relate to technology — rather than simply accepting the version of it that arrived?
I'm not asking this to be provocative. I'm asking it because when you actually trace the timeline of the last six years, you realise that the human beings navigating AI in 2026 are not the same human beings who existed in February 2020. We have been changed. Profoundly, rapidly, and largely without our informed consent. And AI — for all the attention it receives — is not the beginning of that story. It arrived at the end of a much longer one.
March 2020
It's worth pausing on what happened, because we are already in danger of normalising it.
In a matter of weeks, the entire pattern of human social and professional life in the UK was restructured. Offices closed. Schools closed. Shops closed. The physical infrastructure through which people met, worked, learned, bought things, received care, and maintained their sense of social existence was suspended.
We went inside. And the only way out — the only way to stay connected, employed, educated, and in some cases sane — was through a screen.
Zoom went from 10 million daily meeting participants to 300 million at its peak. That is not an incremental adoption curve. That is a forced migration. And it affected people across every age group and demographic, including over-65s who had never owned a smartphone, had never seen any particular reason to, and who suddenly found themselves navigating video calls in order to see their grandchildren. They got the devices because they needed human connection. The technology came second. The need came first. That matters.
What Actually Got Reprogrammed
The pandemic changed behaviour. But before we can understand what AI is doing to us now, we need to be honest about what the pandemic left behind.
It left a population that had been trained — through necessity, not choice — to do an enormous range of things remotely, digitally, and without physical presence. Banking. Shopping. GP appointments. Job interviews. Court hearings. Social relationships. Grief. The threshold for what requires in-person human interaction was dramatically and permanently lowered.
And into that altered landscape, at a moment when people were already exhausted and destabilised, came everything else.
A cost-of-living crisis that made financial anxiety mainstream rather than marginal. An energy crisis. The economic shock of a particular period in government that shook confidence in institutional stability. A war in Europe — the kind of geopolitical rupture that an entire generation had been told, implicitly, would not happen again. The death of a monarch who had represented continuity for seven decades. Global scandals that eroded trust in institutions, powerful individuals, and the systems designed to hold them accountable.
Six years of that. Six years of change at a pace that the human nervous system was not designed to absorb.
And then, in the middle of all of it: generative AI.
The Accelerant
ChatGPT launched in November 2022. By early 2023, it had 100 million users. The speed of adoption was, by any historical comparison, extraordinary. But I think it is worth asking why — because the answer tells us something important about where we actually are.
AI did not arrive into a stable, reflective population making careful choices about technology. It arrived into a population that had already spent two years being herded through digital systems by necessity, that was tired, financially pressured, and quietly desperate for anything that made life feel more manageable. It offered to help. And people took the help. Of course they did.
That is not a criticism. It is an observation about context. And context matters enormously when we are trying to understand what AI is actually doing to human decision-making — because the question is not just what AI does. It is who it is doing it to, and what state they were in when they accepted it.
The Outsourcing We Didn't Notice
Here is the thing about AI that I think deserves far more attention than it currently receives.
We talk endlessly about what AI can do — the tasks it can complete, the efficiencies it can generate, the content it can produce. We talk far less about what happens to human beings when those tasks, efficiencies, and outputs were previously the things that required us to think, decide, and be present.
AI is not just automating processes. It is automating judgment. It is suggesting what to write, what to prioritise, what decision to make, what the answer is. And we are accepting those suggestions — not always, not blindly, but routinely, and with decreasing friction — because we are tired, time-poor, and because the suggestions are often good enough.
The question is what we are losing in the process. Not dramatically. Not all at once. But gradually, through accumulated habit, in the same way that any skill atrophies when it goes unused.
When a professional routinely lets AI draft their communications, write their reports, summarise their research, and formulate their recommendations — and when that professional has also spent two years in a pandemic that compressed their capacity for sustained cognitive effort — what remains of the independent judgment they were trained to apply? I don't know the answer. But I think it's a question we need to be asking.
The Generation Left Behind
There is a dimension of this that we are not talking about honestly enough.
The UK's NEET figures — young people not in education, employment, or training — are at levels that should be causing serious political and social alarm. And while the causes are complex, one of them is not complicated at all: the entry-level jobs that have historically provided young people with their first experience of work, responsibility, and professional identity have been automated away.
Self-service checkouts replaced cashiers. Automated order systems replaced waiting staff. Warehouse robots replaced pickers and packers. AI chatbots replaced first-line customer service agents. The work that used to be the bottom rung of the ladder — imperfect, sometimes tedious, but formative — has been quietly removed.
For the companies that made those changes, the business case was clear. For the young people who needed those roles, the consequence is a labour market where the ladder has been shortened from the bottom, and the only available rungs are ones for which they have no experience, because they were never given the opportunity to acquire it.
This is not a technology story. It is a human one. And it is happening at the same time as the broader reprogramming I've described — so we have a generation entering adulthood in a world that has been reshaped by a pandemic, destabilised by a series of cascading crises, accelerated by AI, and from which the most accessible entry points to working life have been removed.
What Responsible Thinking Looks Like
I want to be careful here, because I am not making an argument against automation or AI. The technology is not the villain. The absence of considered thinking around its human consequences is.
When an organisation deploys AI or automation in a way that displaces human roles, it has a responsibility to ask what happens to the people those roles belonged to. Not as an abstract CSR exercise, but as a genuine ethical question about the social contract between organisations and the communities they operate in.
When we build systems that progressively remove human judgment from consequential decisions — hiring, lending, healthcare, welfare — we need to be honest about what accountability means when those decisions go wrong, and who bears the consequences.
When we introduce AI into workplaces that contain people who have already been through six years of upheaval, anxiety, and forced adaptation, we need to ask whether those people have the psychological and professional resources to engage with it thoughtfully — or whether they are simply going to accept it because they are too tired to do otherwise.
A Final Thought
We are six years on from the moment that, looking back, began the most compressed reprogramming of human behaviour in living memory. Most people have not stopped to examine what happened to them — not because they lack the capacity, but because the pace of change has not given them the space to do it.
AI will not slow that pace. It will accelerate it. And the gap between what these systems are doing and what most people understand them to be doing will continue to widen.
The humans in this story are not passive. They are adaptive, resourceful, and resilient in ways that consistently exceed expectations. But adaptability is not the same as agency. And what I think we owe each other — as organisations, as professionals, and as citizens — is the honesty to distinguish between the two.
We didn't choose this version of the world. We adapted to it. The question for the next six years is whether we are going to be any more intentional about the version that follows.
Views expressed are personal reflections on technology, society, and human behaviour. Nothing in this piece constitutes legal or professional advice.