I was on a panel earlier this week about the use of AI in the public sector when a question from the audience crystallised something that has been troubling me for some time. The questioner described how their department had traditionally relied on analysts to manually clean data for clinical purposes—a painstaking task that required skill, contextual awareness, and institutional knowledge. A new machine learning system had been implemented to automate this work, freeing up analysts for more engaging tasks.
On the surface, this seemed like an obvious win. Data cleaning is notoriously unpopular amongst analysts, and automation promised efficiency gains. But the questioner raised a deeper concern: what happens when the skills and contextual understanding needed for data cleaning disappear from the workforce? And when the system goes down, who still remembers how to do the job manually?
The Cognitive Atrophy Problem
This concern is borne out by recent research from Microsoft and Carnegie Mellon University, which found that the more we rely on AI systems, the more our cognitive faculties become “atrophied and unprepared” for the moments when they are needed. The study also found that the greater a worker’s confidence in the AI’s capabilities, the less critical thinking they apply, creating a dangerous dependency loop.
This phenomenon isn’t unique to AI. Aviation has long grappled with similar challenges through what Lisanne Bainbridge called the “ironies of automation”. Commercial pilots may rely on autopilot for up to 99% of flight time, yet must maintain manual flying skills and situational awareness precisely because they need to take control during critical moments. The aviation industry recognises this tension and mandates recurrent training to prevent skill decay.
From Individual to Institutional Risk
What we’re witnessing represents more than individual skill atrophy—it’s the systematic erosion of institutional memory and capability. Organisations are human constructs built on collective endeavour. While people provide creativity and judgement, processes and protocols serve as the organisation’s institutional memory, encoding lessons learned, preserving knowledge, and guiding future decisions.
But this institutional memory isn’t just stored in formal procedures; it lives in the tacit knowledge of workers who understand the context, workarounds, and nuanced judgements that make systems actually function. James C. Scott, in Seeing Like a State, illustrates this through “work to rule”, the labour tactic in which workers follow formal procedures to the letter and nothing more, and production grinds to a halt. It’s the workers’ goodwill, agency, and institutional knowledge that enable organisations to adapt, innovate, and respond to unexpected challenges.
The Commercial Capture of Public Memory
The privatisation concern becomes acute because, unlike previous waves of automation that replaced manual tasks, AI systems absorb and encode the cognitive processes that constitute institutional memory. As these proprietary systems are trained on organisational data and decision-making patterns, they don’t just automate tasks; they capture the institutional knowledge that was once held collectively by the workforce.
This creates a troubling dynamic, particularly in the public sector. Cash-strapped public organisations, facing rising demand for services, are understandably attracted to AI solutions that promise efficiency gains. But as these systems become embedded and institutional memory atrophies, organisations grow dependent on privately owned models whose terms of service, pricing, and availability answer to corporate rather than public interests.
It’s reminiscent of the aviation industry’s “Power by the Hour” model, pioneered by Rolls-Royce, under which airlines pay for the hours an engine runs rather than buying it outright: the more you fly, the more you pay. Except in this case, what’s being purchased isn’t just computational power, but the very capacity for institutional thinking and decision-making.
The Stickiness Trap
Digital platforms are designed with “stickiness”—friction that makes it costly and difficult to leave. But when applied to institutional AI, this stickiness becomes a form of cognitive lock-in. The organisation’s ability to think independently about its core functions becomes dependent on the commercial platform. Staff lose the skills to perform critical tasks manually, institutional processes become optimised around the AI system’s capabilities, and switching costs become prohibitive.
Towards Institutional Sovereignty
This doesn’t mean rejecting AI—the efficiency gains and analytical capabilities it offers are genuine. But it does require thinking seriously about institutional sovereignty and resilience. How do we harness AI’s benefits whilst maintaining our capacity for independent judgement and decision-making?
The aviation industry’s approach offers one model: mandatory training to maintain manual skills even when automation handles routine tasks. For public institutions, this might mean preserving core analytical capabilities within the workforce, maintaining hybrid human-AI processes for critical functions, and ensuring that institutional knowledge remains accessible and transferable.
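To make the hybrid approach concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the record format, the clean_with_model stub standing in for a proprietary service, and the 5% review rate are all hypothetical. The point is the routing logic, which deliberately keeps a share of the work in analysts’ hands so the manual skill continues to be exercised, much as pilots log mandatory hand-flying time.

```python
import random

# A minimal sketch of a hybrid human-AI cleaning pipeline. The record format,
# model stub, and review rate are hypothetical; the point is the routing.

MANUAL_REVIEW_RATE = 0.05  # fraction of records always cleaned by hand

def clean_with_model(record: dict) -> dict:
    """Placeholder for the automated cleaner (e.g. a proprietary ML service)."""
    cleaned = dict(record)
    cleaned["status"] = "cleaned_by_model"
    return cleaned

def clean_manually(record: dict) -> dict:
    """Placeholder for the analyst path; in practice, a human review queue."""
    cleaned = dict(record)
    cleaned["status"] = "cleaned_by_analyst"
    return cleaned

def clean_record(record: dict) -> dict:
    # Route a random sample to analysts regardless of model performance,
    # analogous to mandatory hand-flying hours in aviation.
    if random.random() < MANUAL_REVIEW_RATE:
        return clean_manually(record)
    return clean_with_model(record)
```

The review rate is the policy lever: it trades short-term efficiency against long-term skill retention. In practice you would also want to route the cases the model is least confident about to the same human queue, since those are exactly the ones that exercise contextual judgement.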
More fundamentally, it requires recognising that the adoption of AI in public institutions isn’t just a technical decision—it’s a question of democratic governance and institutional autonomy. Who controls the systems that shape how public decisions are made? And what safeguards exist to ensure that institutional memory serves public rather than private interests?
These questions become more urgent as AI systems become more sophisticated and pervasive. We need a serious conversation about how to maintain institutional resilience in an age of algorithmic dependency, before we find ourselves unable to remember how to think for ourselves.