The Phenomenology of the Senior
Why the age of AI needs systems thinkers, not tool operators
There is a number buried in Anthropic’s recent research on AI and labor markets that should trouble anyone who thinks about education. Hiring rates for workers aged 22–25 in AI-exposed occupations have dropped by roughly 14% since late 2022. Not unemployment - hiring. The people already in those jobs are doing fine. It’s the door that’s narrowing.
This might sound like a familiar automation story - machines replace workers, workers retrain, the cycle continues. But something structurally different is happening this time. AI is not replacing senior people. It is replacing the tasks that used to make people senior.
Think about what a junior role actually is - in engineering, in consulting, in law, in analysis. It is, in most cases, a structured apprenticeship disguised as a job. You do small, well-scoped tasks. You write the boilerplate. You summarize the documents. You build the components someone else architected. And in doing so, over years, you absorb the logic of the system you’re operating in. You develop judgment. You become senior.
Now imagine that pipeline with AI handling most of those tasks - faster, cheaper, without needing health insurance. The company is rational: why hire a junior to do what Claude does in seconds? But here is the catastrophe hiding inside that rationality: if nobody does the junior work, nobody develops the senior thinking. The apprenticeship pipeline breaks.
---
I did not understand any of this when I was twenty years old, sitting in a seminar room in Saint Petersburg, a little skeptical about being assigned four hundred pages of Hegel for the following week.
For five years, that was the rhythm of my philosophy education in Russia. You read - not an article, not a chapter, but an entire dialogue by Plato, or a large chunk of Kant’s Critique, or two hundred pages of Karl Popper or Paul Feyerabend on the nature of science. Then you came to the seminar and you discussed it. No PowerPoint. No “key takeaways.” You sat with the text and you argued about it until something either broke open or didn’t.
It felt, at the time, spectacularly useless. A 19th-century pedagogy surviving well into the 21st by sheer institutional inertia. When I later came to France and encountered how philosophy was taught at the university there - a semester-long seminar on a single author, a hyper-specific question, a tightly scoped research output - I thought: this is how modern education should work. Focused. Efficient. Professional.
I was wrong. Or rather - I was wrong about what the “old” model was actually doing to me.
What those years of wrestling with entire philosophical systems built was not expertise in Hegel or Kant. It was something harder to name and far more durable: the capacity to hold a large, contradictory architecture in my head. To see the structure of an argument before engaging with its details. To recognize when two apparently unrelated frameworks share a deep grammar - and when they don’t, despite surface similarity. To tolerate ambiguity long enough for a pattern to emerge.
It took me a long time to realize this was not a useless skill. It was the skill. The French university model - specific, scholarly - trained me to know one thing well. The Russian model - brutal, sprawling - trained me to think in systems. And systems thinking, it turns out, is what the age of AI demands most and produces least.
---
David Epstein made the case for this before AI made it urgent. His book Range - published in 2019, which already feels like a different geological era - argues that generalists, not specialists, are the ones who thrive in complex, unpredictable environments. The distinction he draws, following psychologist Robin Hogarth, is between “kind” and “wicked” learning environments. Kind environments have clear rules, tight feedback loops, and repeating patterns - chess, golf, classical music performance. Specialists dominate there. Wicked environments are the opposite: ambiguous, ill-defined, full of novelty, governed by rules that shift under your feet. In wicked environments, the generalists - Epstein’s “foxes,” borrowing from Philip Tetlock - consistently outperform the specialists, the “hedgehogs” who see everything through one disciplinary lens.
Epstein’s evidence is wide-ranging - from the career trajectories of Nobel laureates (who are far more likely to have serious hobbies outside their field than other scientists) to the development paths of elite athletes (who typically *don’t* specialize early, contrary to the Tiger Woods mythology). The pattern is consistent: delayed specialization, broad sampling, cross-domain analogical thinking - these are what produce breakthroughs in complex fields.
Now consider what AI does to this picture. It automates the kind environments almost completely. The tasks with clear rules, tight feedback loops, and repeating patterns — writing boilerplate code, summarizing documents, generating standard analyses — these are precisely what large language models handle well. What remains for humans is the wicked territory: the ambiguous problems, the novel situations, the cross-domain judgments. The work that requires holding multiple frameworks in your head simultaneously and knowing which one to apply.
In other words: AI is systematically eliminating the domains where specialists had an edge, and leaving behind the domains where generalists thrive. The bottleneck is no longer execution. It is the quality of thinking that precedes execution.
---
The dominant educational response to all of this has been, so far, almost comically misaligned.
The reflex — visible in universities, governments, and corporate training programs alike — is to teach people to use AI. And I mean this in the broadest sense. Not just the vulgar version: the prompt engineering workshops, the “how to talk to ChatGPT” webinars. Also the more sophisticated version: the bootcamps on building AI agent frameworks, the courses on tool orchestration, the curricula organized around mastering harnesses and models.
All of it shares the same fundamental error. It trains people to operate within a paradigm that will change - and change fast. The models, tools, frameworks, and interfaces of 2026 will not be those of 2028. Building educational programs around the current technology is like building curricula around a specific model of loom during the Industrial Revolution. The loom changes. The person trained on the loom is stranded.
We did not respond to industrialization by teaching everyone to operate specific machines. We built systems of general education - institutions designed to develop the capacity to think, not to operate. The fact that we seem to be forgetting this lesson in the face of AI is, frankly, alarming.
What does not change - what has not changed in centuries - is the capacity to define a problem well. To see the feedback loops in a system. To reason about second-order effects. To hold multiple contradictory frameworks in mind and know when each one applies. These are not “AI skills.” They are thinking skills. And they are exactly what gets lost when education pivots toward training people on the tool of the moment.
---
Let me make this concrete with an example from the domain people most associate with AI: coding.
Someone can start working with Claude Code today and ship features almost immediately. The tool is powerful. It handles syntax, boilerplate, standard patterns, and even moderately complex implementations with remarkable competence. A person with no prior engineering experience can produce working software — something that would have been unthinkable five years ago.
But can that person do system design?
System design is not about writing code. It is about understanding why an architecture is shaped the way it is. It is about anticipating failure modes before they materialize. About seeing the tradeoffs between consistency and availability, between simplicity and extensibility, between what the system needs to do today and what it will need to do in two years. It is about recognizing which patterns from one domain transfer to another - and which analogies are misleading.
This is, in a precise sense, systemic thinking applied to software. And it cannot be acquired by learning to use a tool, however powerful the tool is. It requires having built mental models — from engaging with complex systems, from seeing architectures succeed and fail, from reasoning about wholes rather than parts. Those mental models can come from engineering experience, yes. But they also come from mathematics, from philosophy, from political economy, from biology, from history - from any discipline that forces you to think about how parts compose into wholes and how systems behave in ways their components don’t predict.
The senior engineer’s advantage over the junior one was never primarily about knowing more syntax or having memorized more APIs. It was about judgment - the ability to see the whole system and make decisions that account for its complexity. That is exactly the kind of thinking that a broad, rigorous, system-level education develops. And it is exactly what no amount of AI-tool training will produce.
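To see the difference, consider a deliberately small example - a toy sketch in Python, with hypothetical `gateway` and `ledger` objects standing in for real payment infrastructure, and the scenario invented for illustration. Both functions “work.” Only one of them was written by someone who asked what happens when the network fails halfway through the call.

```python
# Toy sketch - hypothetical interfaces, invented for illustration.
import uuid

def charge_naive(gateway, customer_id: str, amount_cents: int):
    """The tool-operator's version: correct on the happy path."""
    return gateway.charge(customer_id, amount_cents)

def charge_with_judgment(gateway, ledger, customer_id: str, amount_cents: int,
                         idempotency_key: str | None = None):
    """The version judgment writes: the failure mode is anticipated.

    If the connection drops after the gateway charges the card but before
    we receive the response, a blind retry double-charges the customer -
    a second-order effect the happy path never reveals. An idempotency
    key makes retries safe.
    """
    key = idempotency_key or str(uuid.uuid4())
    if ledger.already_processed(key):   # a retry of a completed charge?
        return ledger.result_for(key)   # return the recorded outcome; charge nothing
    result = gateway.charge(customer_id, amount_cents, idempotency_key=key)
    ledger.record(key, result)          # persist the outcome before acknowledging
    return result
```

Nothing in the second version is harder to type than the first, and an AI assistant will happily produce either one. The difference is that someone reasoned about the whole system - the network, the retry, the customer’s bank statement - before a single line was written. That reasoning is the part the tool does not supply.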
---
This cannot be solved at the company level.
Companies are rational actors. They optimize for output. If AI handles the tasks that juniors used to do, companies will - quite reasonably - stop hiring juniors to do those tasks. They will hire fewer people and expect those people to operate at a higher level from day one. This is already happening. The Anthropic data captures the beginning of it.
But what is rational at the firm level is catastrophic at the system level. If every company stops investing in junior development because AI handles junior work, then the entire pipeline that produces senior thinkers collapses. No one is doing it on purpose. It is an emergent failure - the kind of thing that only becomes visible when you look at the system as a whole, which is, not coincidentally, exactly the kind of thinking I have been arguing we need more of.
This is where education has to intervene. And this is where the current trajectory of educational reform is most dangerous. The pivot toward “AI skills” - whether that means prompt engineering or framework mastery or “digital fluency” - is doubling down on precisely the wrong thing. It is training people for the tasks that AI will eat next, not for the judgment that AI cannot replace. It is producing tool-operators in an era that desperately needs system-thinkers.
---
I do not want to end with a labor market argument. The framing of “how do we get people re-employed” - while not wrong, exactly - is too small for what is actually happening.
It is not even clear what kind of labor market we will have. The Anthropic research itself highlights the gap: 94% of tasks in computer and math occupations are theoretically feasible for AI, but only 33% are currently covered. That gap will close. And when it does, the very categories of “junior” and “senior” may dissolve into something we do not yet have language for. Talking about smoother “workforce transitions” assumes a destination that looks roughly like where we came from. That assumption is probably wrong.
The real question is larger, and, I think, more hopeful. If AI compresses the distance between intention and output, if execution becomes genuinely cheap, then the scarce thing is no longer the ability to do. It is the ability to see - to see what matters, to see how systems interact, to see the second-order consequences of choices, to see the difference between a problem worth solving and a problem that merely looks like one.
That capacity - let us call it what it is: judgment - is not a job skill. It is a form of human agency. And it is cultivated not by training people on tools but by immersing them in complex, demanding, sometimes maddeningly abstract systems of thought. By making them read four hundred pages of Hegel and then defend a position. By exposing them to biology and economics and philosophy and engineering - not so they become dilettantes, but so they develop the cross-domain pattern recognition that David Epstein documents and that AI makes indispensable.
The generalist education I am arguing for is not a way to stay employable while the machines advance, but rather the precondition for a new kind of human agency. One where people do not merely operate systems, but design the systems worth building. Where the question is not “how do I use this tool?” but “what should exist that does not yet exist - and why?”
The exposure gap between what AI can theoretically do and what it currently does gives us time - but not unlimited time. The choice we face in education is not between “traditional” and “modern,” but between producing people who can think about the whole - and producing a generation of tool-operators who become obsolete the moment the tool updates.
The irony, of course, is that the most future-proof education might look less like a 2026 AI bootcamp and more like a seminar room in Saint Petersburg, circa 2009, with four hundred pages of Hegel on the table and absolutely no idea what it would turn out to be good for.

