AI’s Capability Compression: How Work Is Changing in the Age of Automation
- Guy van der Walt
- Feb 1
- 17 min read
Updated: Feb 2

Imagine a seasoned marketing manager, Jane, with 15 years of experience. She used to lead a team of five, each handling tasks from copywriting to data analysis. Today, thanks to AI tools, Jane’s team has shrunk to two, but their output has doubled. Routine tasks that once took days (sifting through market research, drafting reports, A/B testing ad copy) are now largely automated. Jane finds herself spending less time executing those tasks and more time deciding which strategies to pursue next. Her situation is increasingly common: AI hasn’t eliminated her job, but it has compressed the capabilities needed for that job into a smaller, more productive package. This phenomenon, capability compression, is quietly redefining the nature of work in many fields.
In the public discourse about AI and jobs, much attention goes to whether whole professions will vanish. But the more immediate transformation is subtler: AI is making each role more productive, enabling one person (or a small team) to accomplish what used to require many. The capability of an entire department can be compressed into a single AI-augmented role, fundamentally altering how we define roles, skill sets, and career paths. Rather than a sudden mass unemployment event, we’re witnessing a shift where the centre of gravity in work moves from roles anchored in execution to those focused on judgment, creativity, and decision-making. Below, I explore what capability compression means, how it’s drawing a new line between “doers” and “deciders,” why mid-career professionals feel the squeeze most, and why epistemic discipline may become the defining skill of the AI era.
Capability Compression: Doing More with Less
“Capability compression” refers to AI’s knack for shrinking the human effort needed to achieve a given output. In other words, AI compresses multiple layers of skill or labour into a leaner process. A small example: a single customer support agent armed with an AI chatbot can handle what a team of support reps used to, instantly resolving common queries and only escalating the truly tricky issues. In one fintech outsourcing operation, the introduction of AI meant they could deliver “institutional-grade operations without institutional capital expenditure,” achieving the kind of service quality that once required an entire division of staff. In plain terms, one AI-empowered person can now do the work of many. This boosts productivity, but it also means organizations may need fewer people for the same scope of work.
Concrete signs of capability compression are appearing across industries. On Wall Street, for example, executives surveyed recently expected on average a 3% reduction in their workforce as AI takes over certain tasks, with a quarter of firms anticipating headcount cuts of 5–10% in back-office and support roles. These aren’t just cost-cutting layoffs – they reflect the fact that the remaining employees, augmented by AI, can handle the workload of the ones who aren’t replaced. AI is acting as a force multiplier for human productivity, and companies are adjusting accordingly.
It’s important to frame this carefully: AI is not (yet) a wholesale job destroyer, but it is an aggressive task automator. Many roles will survive, but their content will change. A financial analyst, for instance, might still have a job title and salary, but instead of manually crunching numbers and drafting reports, they might supervise an AI that does those things, then focus on interpreting results and making decisions. The role remains, yet one analyst with the latest tools can produce what a team of four did before. As one tech strategist quipped, managers often ask “Can AI substitute for four people I don’t have?” – and increasingly, the answer is yes. This is capability compression in action: each person’s capacity is amplified, and fewer people are needed to achieve the same output.
Vivid examples of capability compression abound:
In software development, an experienced programmer using AI code assistants can complete projects significantly faster, effectively doing the work of several average coders. In testing scenarios, developers have reported AI helping them cut coding time by more than half, letting small teams achieve in weeks what once took months.
In marketing, one content creator armed with generative AI can generate drafts for blogs, social posts, and ad copy in a day – work that might have required a whole content team in the past. That creator’s role shifts from writing every word to curating, editing, and injecting human insight into AI-generated material.
In customer service, AI chatbots and voice assistants now resolve a large percentage of routine inquiries. A single human agent can supervise AI systems that handle 70%+ of incoming queries automatically, stepping in only for complex cases. The result is that support centers can serve more customers with fewer agents on the payroll.
This dynamic doesn’t mean humans are obsolete – but it does mean that the value of a human worker is moving up the value chain. If AI has compressed the lower-level capabilities of a role into a utility, what’s left for the human is the higher-level thinking. In short, the leverage of each worker increases, and so does the expectation on what that worker should accomplish. This brings us to the emerging divide between roles that are primarily about execution and those that are about decision and direction.
The Execution–Decision Divide: Doers vs. Deciders
AI’s rapid progress is drawing a fresh line in the workforce between roles centered on execution and those anchored in decision-making and ownership. In the past, many mid-level jobs combined a bit of both: you executed tasks and made some decisions. Now, the execution part is increasingly handled (or at least heavily assisted) by machines. What’s left – and what’s growing in importance – are the parts of work that involve human judgment, strategic thinking, creativity, and leadership. The result is a widening gulf between “the doers” and “the deciders.”
Execution-centric roles are those that consist largely of following established processes, applying known techniques, and producing defined outputs. Think of an accounts payable clerk processing invoices, or a junior lawyer reviewing documents for relevant information – these are roles built on executing a series of steps or analyses. AI excels at exactly these kinds of routine, rules-based, or data-intensive tasks. It’s no surprise that roles “centered on execution” are the first in line for automation and displacement. A recent whitepaper from the Centre for Finance, Technology and Entrepreneurship (CFTE) bluntly predicts “mass displacement” of jobs that are purely execution-focused. In plain English, if your job is mainly doing what someone tells you to do (or what procedure dictates), an AI can probably learn to do it – and eventually do it faster and cheaper.
On the other side of the divide are decision-centric roles – jobs that involve defining what needs to be done, making sense of ambiguity, exercising judgment, and taking responsibility for outcomes. These include strategic decision-makers, project leaders, innovators, and experts who own the decisions rather than just carry them out. Such roles might be, for example, a product manager deciding which features will best serve the market, or a senior engineer determining the architecture of a new system. AI can provide analysis and options, but deciding which option to pursue (and why), or coming up with a novel approach to an undefined problem, remains largely a human domain. CFTE’s research echoes this: it foresees the rise of “supercharged professionals” – people who use AI to greatly expand their scope and impact – and a small group of “creative disruptors” inventing new products and models. These groups correspond to humans shifting into more augmented, decision-oriented roles, leveraging AI for execution. In other words, those who can pair domain expertise with strategic judgment and AI tools will become exponentially more productive (hence, “supercharged”) instead of obsolete.
I can put it more simply: AI is shifting the value of human work from doing to deciding. The “doers” – people who excel at carrying out tasks – will increasingly either see their tasks automated or will need to evolve into “deciders” – people who figure out which tasks should be done, how they should be approached, and what the goals should be. In practice, this means many job descriptions are being rewritten. Take project engineering: a decade ago a mid-level engineer might spend days manually tweaking designs or running simulations (execution work). Now much of that can be done by AI in minutes. So the engineer’s value-add shifts to choosing what to simulate, interpreting the results, and deciding on design directions (decision work). The ones who thrive will be those who embrace that shift, moving up from the weeds of execution to the vantage point of oversight, strategy, and decision.
This execution–decision divide also explains why experience and context are still so vital even as AI grows more capable. A senior lawyer or doctor, for instance, isn’t just a walking database of knowledge (AI can often beat humans at pure retrieval of facts); rather, their value lies in years of contextual understanding – knowing which questions to ask, which approach fits a nuanced situation, what trade-offs to consider. That kind of holistic judgment is hard to code. It’s why AI, for all its prowess, ends up highlighting other skills in humans: leadership, ethical reasoning, empathy, multidisciplinary creativity. These become the differentiators that separate someone who merely uses AI from someone who directs it toward worthwhile goals.
However, not everyone finds it easy to leap from execution to decision-making. This is particularly a concern for those in the middle of their careers, who built their professional identity and skill set in an era when execution was king. All of a sudden, the rules of the game are changing around them. Mid-career professionals are living through this transition most acutely.
The Mid-Career Squeeze: Caught in Transition
For mid-career professionals (often those in their late 30s, 40s, and 50s), AI’s workforce transformation can feel like the rug is being pulled out from under them. Many of these individuals grew up in their industries mastering the very tasks and tools that AI is now automating. They are the ones who, a decade or two ago, were the “doers” who gradually became deciders as they rose in the ranks. But unlike today’s entry-level workers, they didn’t start their careers with AI as a given. The rules and ladder they climbed are shifting dramatically mid-game.
Picture Michael, a 42-year-old data analyst. In the early 2010s, Michael’s expertise in painstakingly gathering data, cleaning it, and producing reports made him a go-to person at his firm. He earned promotions by being fastidious and detail-oriented – a great “doer.” Now, off-the-shelf AI tools can do 80% of that grunt work in a flash. Suddenly, junior analysts armed with these tools can generate insights almost as quickly as Michael can, even if they lack his depth of experience. Michael finds himself squeezed: the entry-level has effectively risen to meet him, and the expectations for his role have moved upward. He’s now asked to focus on higher-order analysis, to oversee multiple AI-augmented juniors, and to ensure that the AI’s outputs actually make sense in context. This requires a different skill set than the one he originally honed.
Mid-career professionals like Michael are wrestling with a few distinct challenges:
Relearning the tools: The toolbox has changed. Skills that were highly prized 15 years ago (like manual Excel wizardry or writing flawless code from scratch) are less valued when AI can perform them. Instead, fluency in AI and data tools is becoming essential. Those who don’t update their toolset feel like they are speaking an outdated dialect in their field.
Role ambiguity: Their job descriptions are shifting underfoot. A report by an industry group noted that many entry-level engineering roles are being redefined or merged due to AI efficiency gains. That means the traditional ladder (Junior → Mid-level → Senior, each with well-defined duties) is getting compressed. New grads may skip rungs, coming in with AI skills that let them handle higher-level work from day one. This compresses the early career stage and puts pressure on those in the middle who expected a longer runway to gain expertise.
The experience paradox: Mid-career workers do have something juniors don’t – years of domain experience – but they must apply it in new ways. They’re no longer valued for how many tasks they can execute (the AI does a lot of that); they’re valued for how effectively they can translate their experience into guidance for teams and AI systems. That’s a tough shift for someone who took pride in being the best doer on the team. It can feel like a loss of identity, going from player to coach of sorts. And not everyone is prepared or trained for that transition.
Statistically, this “mid-career squeeze” is becoming visible. A study by OpenAI and the University of Pennsylvania noted that a high proportion of tasks in jobs like accounting, legal research, and financial analysis (often mid-career fields) are highly susceptible to AI automation. Tram Anh Nguyen, co-founder of CFTE, has observed that people over 40 in professional roles are among the most at risk of major disruption as businesses integrate AI. This isn’t because 40-year-olds can’t learn – it’s because their roles are precisely those being most reconfigured. Nguyen also points out a hopeful corollary: professionals won’t be replaced if they re-train, and this retraining isn’t just about learning new tech tricks. It’s about a mindset shift to continuous learning and adaptability.
Let’s go back to Jane from the introduction – the marketing manager whose team got smaller as AI took on more work. Jane is mid-career and she’s adapting, but not without difficulty. She’s had to become comfortable managing AI outputs: critiquing AI-generated copy, guiding the AI on tone or strategy, and double-checking facts. She’s also taken on more of a client-facing role, spending more time brainstorming campaign ideas (while the AI drafts the tactical plans). In essence, Jane’s human strengths – her understanding of client psychology, her creative intuition about branding, her ability to connect the dots between a marketing campaign and broader business goals – have become far more important than her speed at writing a press release or crunching survey numbers. Jane admits that if she didn’t have those higher-level skills to fall back on, she’d be in trouble. “If all I knew how to do was crank out social media posts and media briefs the old way,” she says, “I’d be watching an AI do my old job and wondering where I fit.” Many mid-career professionals are in exactly that reflective spot right now, figuring out where they fit in an AI-shaped landscape.
The harsh truth, and urgent call to action, for mid-career people is that comfort with the status quo is a risk. The old advice to “upskill or be obsolete,” while clichéd, has taken on a very real immediacy. One tech services executive put it starkly: “We said that there is no future for single-skilled people… Unless you are multi-skilled with domain understanding, as well as understanding how to use AI and technology, you will not survive.” In other words, a mid-career professional who’s only great at one thing, and that thing is now automated, faces a career dead-end. By contrast, those who combine their domain experience with AI savvy are finding new opportunities. Some are becoming the go-to “AI lead” in their teams, translating between technical AI capabilities and business needs. Others are carving out roles as coaches or mentors for younger staff – not in the old sense of passing down all their hard-won manual techniques, but in the new sense of teaching judgment: how to interpret data, how to question AI’s suggestions, how to avoid common pitfalls. This leads us to perhaps the most critical skill everyone, but especially mid-career professionals, needs to cultivate: epistemic discipline.
Epistemic Discipline: The New Must-Have Skill in the AI Age
Amid all this talk of tools and tasks, there’s a less tangible but crucial ability that distinguishes those who thrive alongside AI: epistemic discipline. Simply put, epistemic discipline is the practice of rigorous thinking and knowledge management – it’s an approach to handling information and truth. In an era where AI can generate endless content, answers, and analyses, the ability to critically evaluate and verify that output is gold. It’s not as flashy as coding or prompt engineering, but it’s far more important in the long run. In fact, some experts argue that the central skill of the AI age isn’t prompt cleverness at all; it’s epistemic discipline. High performers will be those who know how to separate signal from noise and truth from near-truth in what AI provides.
What does epistemic discipline look like in practice? It means developing habits of mind to avoid being misled by the very tools that amplify your productivity. Key elements include:
Verification over Assumption: Treat every AI-generated output as a draft or a suggestion, not an absolute truth. For example, if an AI summary of a legal case or a scientific report looks convincing, an epistemically disciplined professional will still double-check the source facts. They’ll cross-verify numbers, cite original sources, or run secondary searches to confirm. They understand that AI models, especially large language models, can “hallucinate” false information – producing answers that sound authoritative but are bogus. So they adopt a “trust, but verify” stance with AI at all times.
Critical Questioning: This is the art of asking good questions – both of the AI and of oneself. Instead of accepting the first answer, someone with epistemic discipline will ask follow-ups: “On what data or reasoning was this answer based?”… “What alternative explanations exist?”… “What assumptions am I (or the AI) making here, and are they justified?” They approach AI as a tool for inquiry, not just an answer vending machine. A cultural shift is underway where question literacy – knowing how to probe and prompt effectively – is as important as having the answer. It’s a moral and epistemic discipline to keep questioning rather than passively accepting.
Decomposition and Constraint: Highly effective people in the AI era know how to break down complex problems into smaller chunks that AI can handle, and how to set constraints so the AI works within useful bounds. For instance, rather than asking a broad, vague question and getting a broad, vague answer, they’ll break the problem into parts and give the AI specific, contextual tasks. They might constrain an AI model by providing structured templates or step-by-step checklists, forcing a logical approach. This disciplined structuring of queries helps prevent the AI from going off on tangents and produces more reliable outputs. It’s an advanced form of prompting that reflects clear thinking on the human’s part.
Maintaining Contextual Understanding: Epistemic discipline also involves continuously grounding AI’s output in real-world context and domain knowledge. An AI can churn through a million data points and give you trends, but it takes a disciplined expert to say, “That trend doesn’t make sense given last quarter’s regulatory changes; perhaps the data before that point are weighted too heavily,” or “This marketing copy sounds slick, but is it actually striking the right chord for our specific audience?” In other words, they supply the why and should we that AI alone cannot. They integrate human wisdom with machine output. Organizations that fail to do this can become “informationally rich but epistemically poor” – drowning in AI-generated reports and numbers with nobody to interpret the meaning. Epistemic discipline is the antidote to that, ensuring that knowledge (not just data) is created from information.
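The "decomposition and constraint" habit above can be made concrete with a short sketch. This is a minimal, vendor-agnostic illustration, not a real integration: `ask_model` is a hypothetical stand-in for whatever LLM API you use (here it just echoes the prompt), and the template wording is invented for the example. The point is the structure – one bounded, constrained sub-question at a time instead of one broad, vague one.

```python
# Hypothetical sketch of decomposition-and-constraint prompting.
# `ask_model` is a placeholder for a real LLM call; it is stubbed here
# so the example is self-contained and focuses on the structure.

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM API call; returns a canned reply for illustration."""
    return f"[model reply to: {prompt[:40]}...]"

# A constrained template: bounded format, sourcing rule, and an explicit
# "say unknown" instruction to discourage confident-sounding guesses.
TEMPLATE = (
    "Task: {task}\n"
    "Context: {context}\n"
    "Constraints: answer in at most 3 bullet points; "
    "cite the source for every figure; say 'unknown' if unsure.\n"
)

def run_decomposed(context: str, subtasks: list[str]) -> dict[str, str]:
    """Ask one constrained sub-question at a time instead of a broad one."""
    return {
        task: ask_model(TEMPLATE.format(task=task, context=context))
        for task in subtasks
    }

# Instead of asking "How is our product doing?" in one go, the analyst
# breaks it into three bounded questions over a defined slice of data:
answers = run_decomposed(
    context="Q3 sales data for product X, EMEA region",
    subtasks=[
        "Summarise the revenue trend quarter over quarter",
        "List the three largest customer churn drivers in the data",
        "Flag any data-quality issues that could bias the trend",
    ],
)
for task, reply in answers.items():
    print(task, "->", reply)
```

Each sub-answer is small enough to verify against the underlying data before it feeds the next decision, which is exactly where the "verification over assumption" habit plugs in.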
Cultivating epistemic discipline is becoming a defining differentiator among professionals. Anyone can now get a “pretty good” answer from an AI with minimal effort. The standout contributors in a team are those who habitually take that answer and refine it, verify it, and place it in context. They are the ones who prevent errors from slipping through, who catch the “edge cases” that an AI missed, and who adapt AI suggestions to the real nuances of a situation. In essence, they turn raw AI output into reliable, actionable intelligence. This discipline also underlies ethical AI use – being aware of biases in AI models, knowing when to override the AI’s judgment, and understanding the limits of what the AI actually knows versus where it’s speculating.
Importantly, epistemic discipline is not just for individual contributors; it needs to be baked into team cultures and management practices. Smart organisations are starting to ask in job interviews not just “Can you use AI tools?” but “How do you ensure the information you get from AI is correct and relevant?” – in other words, “What’s your process for quality control and critical thinking when you use AI?” This line of questioning separates those who use AI mindfully from those who might use it blindly. It’s much like the difference between having a spell-checker and actually knowing the language: one catches obvious mistakes, but the other ensures what you write truly says what you mean.
One might argue that all this sounds like just a posh term for “critical thinking.” Indeed, epistemic discipline is rooted in classic critical thinking skills, but it’s specifically attuned to the AI age. It recognizes that we now have a glut of instantly generated answers – so the premium shifts to those who can sift, judge, and synthesize truth from that glut. It’s a skill of discernment in an era of information overload and machine-generated content. As AI grows more capable, raw knowledge becomes cheap (the AI can supply it); what remains scarce and valuable is the human ability to question, contextualize, and create meaning. Epistemic discipline is exactly that ability.
Conclusion: Beyond the Compression – A New Human Frontier?
AI’s encroachment into the workplace has forced us to reassess what uniquely human work looks like. The idea of capability compression suggests a future where fewer people can achieve more – a single individual, amplified by AI, could run a department’s worth of output. This raises a provocative question: What will we do with that amplified potential? Will companies simply trim headcount and let one person do the work of five? Or will we expand our ambitions, tackling projects that were previously unimaginable because now a small team can handle complexity and scale that demanded an army before?
For mid-career professionals, especially, there is a fork in the road. One path is to cling to the familiar and risk getting left behind as the “doer” jobs dry up. The other path is to embrace becoming a “decider,” to leverage one’s experience in concert with AI, and to double down on the human qualities that machines can’t replicate. The transition isn’t easy, but it can be invigorating. In fact, if AI frees people from many rote tasks, it could liberate human creativity and initiative on a broad scale. Some early evidence is optimistic: engineers report higher job satisfaction when AI handles the grunt work and they can focus on creative problem-solving. We could see similar effects in marketing, law, medicine – imagine professionals spending more time on innovation, brainstorming, and human connection, and less on paperwork and number-crunching. AI, in this sense, could become a catalyst for a more fulfilling work life.
But there’s a catch – this more hopeful outcome won’t happen automatically. It depends on how organisations, educational systems, and individuals respond right now. Will companies invest in upskilling their workforce to take on more decision-centric roles, or will they just hire a new generation that already has those skills? Will mid-career workers receive the support and training to develop epistemic discipline and strategic thinking, or will they be written off as “dinosaurs” as the tech evolves? On a society level, if AI makes the economy more productive with fewer workers, how do we ensure that human talent isn’t wasted and that the gains benefit everyone? These are complex questions with no easy answers yet.
Perhaps the most intriguing question is one of human purpose. If AI handles more of the “doing,” will we redefine success and productivity in terms of how well we think, decide, and imagine? Will we finally value attributes like wisdom, adaptability, and ethical judgment as much as, say, efficiency or output? In a world of compressed capabilities, bigger-picture human skills might become the only skills that matter. As one CEO famously said, “You’re not going to lose your job to AI. You’re going to lose it to a person using AI.” The flip side is, you can be that person using AI – not just using it passively, but harnessing it with vision and discipline.
So ask yourself (and perhaps ask your company and your colleagues): If AI can do in minutes what we used to do in hours, what will we do with the extra time and capability? Will we aim higher, solve tougher problems, craft more meaningful experiences? Will we create new roles that we haven’t even imagined yet (just as past technological revolutions gave rise to entirely new professions)? Or will capability compression simply result in a race to the bottom line, with humans pressured to work ever harder since the bar for “productivity” is raised? The future of work in the AI era hinges on these choices.
AI is compressing our capabilities, yes, but perhaps it’s also expanding a different frontier: our capacity to think and create. The onus is on us to step up. The conversation is no longer about whether “humans still matter.” It’s about how humans can matter more, in the ways that only humans can. AI will handle the rest. The coming years will test our collective resolve to redefine work not as a competition between human and machine, but as a new partnership, one where we let machines shrink the mundane, and we humans stretch the imagination.
In a world where one person, with AI’s help, can do the work of five, do we shrink the workforce to fit the new efficiency, or do we expand our goals five-fold? The answer will shape the future of every mid-career professional today and every new graduate tomorrow. It’s time to decide, and then to act on that decision.




