
The important question is not “where can AI replace human effort?” but “what human capabilities do we need to protect, develop, and deliberately maintain, in order to realise the benefits of AI?”
Most organisations adopting AI focus on the technology: which tools to use, how to integrate them, and what efficiency gains to expect. It is easy to see why people focus on the technical questions: they sit in standard operating procedures and are highly visible. But the issues most likely to create the greatest risks are not technical; they are human.
Risks in the transition to a human-agentic system span the entire business cycle, from execution errors through procedural drift to cultural decay. As always, however, human-factor risks tend to be overlooked, because they are hard to quantify and because there is a deeply rooted human tendency to believe “this time is different”.
There is, however, plenty of excellent research we can learn from, particularly the work in cognitive systems engineering from the likes of Lisanne Bainbridge, David Woods, Erik Hollnagel, and James Reason, who have studied how people interact with automated systems for decades.
Their findings translate directly to the current wave of AI adoption, and they point to three categories of risk organisations are likely to underestimate.
Back in 1983, Lisanne Bainbridge identified the paradox at the heart of automation: increasing automation can increase operational risk by weakening human capability. There are three interlinked but distinct issues to watch for:
Skill degradation - When AI handles routine tasks, people lose the low-stakes repetitions through which expertise is built and maintained. Skills do not just plateau; they atrophy. The practitioner who once developed judgment through accumulated experience becomes a manager of outputs they had no hand in producing.
Verification without comprehension - This is a different problem to skill loss, and in some ways a more insidious one. Workers learn to check AI outputs to confirm they look reasonable, without retaining the underlying knowledge to evaluate whether they are actually correct. The oversight is real in form but lacking in substance. Errors which would once have been caught pass through undetected, not because people are careless, but because they no longer have the conceptual depth to see them.
Brittleness at the edges - AI systems are optimised for the conditions they were designed and trained for. When something genuinely novel or unexpected occurs, the system’s performance degrades and the human, whose depth has eroded through disuse, is poorly equipped to compensate. This is where the first two risks converge, and a seemingly capable organisation becomes fragile just when conditions demand resilience.
James Reason’s work on organisational accidents distinguishes between active failures (the visible mistakes people make) and latent conditions (the structural factors which make failure likely long before anything goes wrong). AI adoption creates latent conditions at scale, often through decisions which look entirely sensible in isolation but carry unintended consequences in a highly connected system.
Illusion of control - AI dashboards and performance indicators create a persuasive and comforting feeling of oversight and understanding. What they do not capture is the deterioration of the adaptive human capacity which sits beneath them. Leaders feel more informed while organisations become more fragile.
Normalisation of reduced redundancy - Each decision to streamline (to remove a check, reduce a team, accelerate a process) appears locally reasonable. Cumulatively, such decisions strip out the buffers and slack that allow organisations to absorb shocks. The organisation becomes optimised for stable conditions but unprepared for uncertainty and unexpected events.
Accountability and blame migration - When failures do occur, the visible human error is identified and addressed. The latent organisational conditions which made the failure likely (the decisions to reduce redundancy, cut training, and accept nominal oversight as sufficient) go unexamined. The same conditions persist, and the same failures recur.
Fragmentation of collective knowledge - Finally, as individuals work in parallel with AI assistance, each optimising for their own goals, teams lose the shared situational awareness that makes collective response possible. This is gradual and hard to detect until the moment it matters: when conditions change and the organisation discovers it cannot think or act as a coherent whole.
Fear-driven compliance - Where AI is perceived as surveillance or a precursor to headcount reduction, employees stop raising concerns, withhold discretionary effort, and comply without genuinely engaging. The organisation loses the active, questioning behaviour which catches problems before they become failures; the ingenuity which solves those problems is lost, and innovation dies.
AI adoption changes how people relate to their work in ways which are likely to carry serious motivational consequences. MindAlpha’s research into human motivation highlights three key drivers of individual motivation which could be affected:
Loss of autonomy - Carrying nominal responsibility without genuine influence or capability is psychologically corrosive. When people feel decisions have effectively been made by the system, and their role is to ratify rather than judge, the motivational consequences are significant. Disengagement follows, not from laziness but as a rational response to a situation in which active contribution feels surplus to requirements.
Skills identity erosion - Professionals who have built careers around particular expertise and who derive a sense of self from that mastery find the work that defined them absorbed into a tool. The expertise still exists, but it is no longer visible, no longer owned by them and no longer the basis on which they are valued. AI tends to absorb the parts of work which are legible and well-defined, measurable, and easy to specify. What remains for humans is the ineffable: judgment calls, relational work, decisions in ambiguous situations. The problem is not just that this residual work is harder. It is that organisations often do not recognise or reward it well, precisely because it is invisible and hard to measure.
Purpose and meaning deficit - Meaning in work is not just about outcomes; it comes from the experience of struggle, iteration, and completion, the narrative arc of having made something. AI disrupts this at three levels. It removes the effortful process through which people generate a sense of accomplishment. It obscures contribution, making it genuinely unclear what the person did versus what the system produced, and with that obscurity goes the psychological ownership that makes outcomes feel earned. And it erodes the social recognition that gives professional effort its meaning: when peers and managers can no longer distinguish skilled work from AI-assisted output, the informal systems of acknowledgment that sustain motivation dissolve.
Bainbridge and Reason come at the same problem from different angles, which raises an interesting debate. Bainbridge locates the core risk in the individual: automation degrades the human competence needed to provide oversight. Reason locates it in the organisation: structures and decisions create conditions in which failure becomes likely regardless of individual capability. If Bainbridge is right, the answer is to protect and develop human skill. If Reason is right, the answer is to redesign the system so individual failure is not catastrophic.
In practice, both are true. Investing in human skill without addressing organisational conditions will leave people competent but structurally exposed. Redesigning systems without maintaining human depth produces organisations which are procedurally robust in day-to-day operations but fragile at the edges, precisely where human intuition and judgment matter most.
Sustainable AI adoption requires deliberate cultivation of human capability, and organisational structures which do not depend on individuals being perfect. Neither is sufficient on its own.
None of this argues against AI adoption. Far from it. We believe AI has the potential to improve the employee experience as well as significantly boost productivity and increase innovation.
However, it demands an adoption strategy which takes the human dimension seriously and positions it as an operational risk factor, not as an afterthought to be delegated to HR departments.
The organisations most at risk will be those which treat AI as a solution to human unreliability and fail to recognise it as a new source of systemic risk in its own right.
The important question is not “where can AI replace human effort?” but “what human capabilities does this organisation need to protect, develop, and deliberately maintain, in order to realise the benefits of AI?” And the next step is “How do we measure and mitigate the human factor risks?”
The answers will determine whether AI makes an organisation sustainably more capable or merely faster but more fragile.
If you would like to learn more, please leave your email below.