The Apprenticeship Implosion.
Exploring how AI is quietly removing the entry-level work that has, for a century, transmitted senior judgement from one generation of professionals to the next.
Software developers between the ages of 22 and 25 have seen their employment fall by nearly 20% since 2024, even as headcount for older developers at the same firms has continued to grow. That number, buried in a Stanford Digital Economy Lab paper from late 2025 with the deceptively cosy title “Canaries in the Coal Mine?”, is the most consequential statistic in the entire AI-and-work conversation right now.
It is also the one almost nobody is reading correctly.
The popular framing is robots-take-jobs: AI is eating the entry-level rung of the labour market, and a generation of young professionals is being locked out. That framing isn’t wrong, exactly. It’s just narrow. The deeper story isn’t that early-career people are losing jobs. It’s that we are unwittingly dismantling the system that has, for a century, produced the senior people we depend on.
The apprenticeship — the messy, slow, half-broken, economically inefficient way humans have always trained the next generation of professionals — is imploding. And we don’t have a replacement.
This is a piece about that implosion: where it shows up first, why it’s almost invisible to the firms causing it, and what it costs us when we eat the seed corn.
A century-old contract, broken in eighteen months
Every knowledge profession runs on the same hidden contract, and it has barely changed since the modern firm was invented in the early 1900s.
A junior shows up. They are bright, ambitious, and largely useless. They are paid relatively little to do work that is, by design, beneath the seniors: the document review, the literature search, the first-pass code, the financial model that gets thrown away, the deck nobody will see. In exchange, they get exposure — to clients, to problems, to seniors thinking out loud. Slowly, over thousands of unglamorous hours, they absorb the tacit knowledge that turns information into judgement. The senior gets cheap leverage. The firm gets work done. The junior gets a career.
It is, viewed from the outside, an absurdly inefficient arrangement. We pay seniors enormous salaries to spend a meaningful chunk of their day giving feedback on first drafts that the senior could have written better in a third of the time. We tolerate juniors making mistakes that anyone with five years of experience would not. We invest in people for years before they generate net positive value.
And it works. It worked for medicine, for law, for accounting, for engineering, for journalism, for architecture, for consulting, for software, for design, for almost every profession that depends on seasoned judgement. The inefficiency was the feature. We were paying for the production of expertise.
That contract just got broken — not on the senior side, but on the labour side. When AI can produce a passable version of a first-year associate’s deliverable in seconds, the economic case for hiring that associate quietly evaporates. And the market is doing exactly what economic theory says it should.
Entry-level postings in software development and data analysis dropped by roughly 60% between 2022 and 2024. A 2025 LeadDev survey of engineering leaders found 54% planned to hire fewer juniors specifically because AI copilots were letting their seniors handle more. Salesforce announced it would hire “no new engineers” in 2025. The Stanford team, looking at the most AI-exposed jobs across the entire US economy, found a 13% relative decline in employment for early-career workers in those occupations. Wages held; jobs disappeared. The labour market is adjusting through hiring decisions, not pay cuts — which is precisely why the adjustment is invisible quarter to quarter and catastrophic decade to decade.
Here is where almost everyone stops. The story is presented as a Gen Z problem, a tech-bubble correction, a passing storm. The bigger story is what comes next.
The pipeline you cannot see on the income statement
Senior expertise is not manufactured in a classroom. It is the residue of thousands of hours of low-stakes mistakes in low-stakes work. The associate who has read four thousand contracts develops a sixth sense for the one with a buried liability clause. The accountant who has reconciled a thousand inter-company ledgers can spot the kind of error that doesn’t even show up as an exception. The engineer who has debugged two hundred 3am production incidents can read a stack trace like sheet music. None of that comes from reading about it. None of it comes from watching someone else do it. It comes from doing it badly, many times, with consequences low enough to survive.
Harvard Kennedy School’s Project on Workforce calls this dynamic the “expertise upheaval,” and they argue that AI’s effect on the learning curve is its most important and least appreciated impact. When AI compresses the curve, it doesn’t just speed up training. It removes the substrate training was built on. You cannot develop judgement about an AI’s output if you have never done that work yourself. The judgement is what you get from doing the work. The work is the school.
“If the current generation of juniors never grapples with low-level problems because AI solves them automatically, they may never develop the deep intuition and tacit knowledge required for senior roles. By 2030, the industry may face a catastrophic shortage of true senior engineers and leaders — those capable of understanding the system below the AI abstraction layer”.
That block quote is from a synthesis of 2024–2026 employment data published by Rezi’s research team in early 2026. It is, as far as I can tell, the most honest thing anyone has written on the subject. We have no plan for how senior people will exist in 2034 if we stop investing in juniors in 2026.
The firms making this decision are not stupid. They are responding to incentives. A junior costs roughly £75,000–£120,000 per year fully loaded for the first three years before they generate meaningful value. A licence to a frontier coding assistant costs a few hundred dollars a month. The numbers, on a one-year planning horizon, are not close. Cutting junior hires this year is good for the income statement this year. It is also liquidating the capital asset — the apprenticeship system itself — that produced every senior currently sitting in the firm.
That asset doesn’t appear on the balance sheet. So nobody books the loss when it depreciates. Until, one day, they look around and the bench is empty.
The verification asymmetry: where the bottleneck moves
There’s a second, subtler dynamic worth naming. Even when juniors are hired into AI-augmented teams, what they’re being asked to do has changed shape — and the new shape is brutal.
The classical apprenticeship asked juniors to produce mediocre work and gave them a senior’s feedback to improve. The AI-augmented version asks juniors to evaluate AI-produced work that already looks polished, in volumes that previously would have taken weeks to generate. The skill demanded is verification, not production. And verification is harder than production, not easier.
To know that a contract clause is wrong, you have to have written enough contracts to feel the wrongness. To know that an AI-generated chart is misleading, you have to have built enough charts to recognise the lie. To know that an AI’s code is subtly off, you have to have written enough code to have intuitions about what good looks like. None of this comes free. And asking a 23-year-old to verify the output of a system designed to sound authoritative on every topic — including topics they have never personally touched — is asking them to build a kind of expertise we do not yet know how to teach in the absence of doing.
The result is a quiet but significant shift in where the work piles up. AI generates fluently and confidently. Juniors pass it through with whatever scepticism they can muster. Errors compound. Eventually a senior catches them — but the senior is now spending more time reviewing than they used to spend producing, and they are reviewing things they did not produce themselves. The bottleneck moves up the org chart. Which is, of course, exactly the bottleneck that “AI productivity” was supposed to remove.
This pattern shows up in domain after domain. Senior engineers report spending more time reviewing AI-generated PRs than they ever spent reviewing human ones. Senior consultants describe doing twice the QA work on decks their juniors built with AI. Senior writers find themselves rewriting more, not less. The narrative says AI lets seniors focus on high-leverage work. The reality, in many cases, is that AI lets seniors do verification at scale — which is necessary work, but it is not high-leverage work, and it is not what we were promised.
A taxonomy of responses
Look at any given firm and you will see one of four postures emerging in response to all this. They are not equally good.
The Liquidator. Cuts junior hiring aggressively, books the savings, claims AI productivity gains, ignores the long-term pipeline question entirely. Common in firms with short executive tenures and quarterly earnings pressure. The 2024–2025 wave of Big Tech layoffs hit early-career engineers disproportionately, and most of those firms have not announced any structural plan to rebuild the pipeline. They are betting either that AI will keep getting better fast enough that no junior pipeline is needed, or — more cynically — that this is the next CEO’s problem.
The Pretender. Continues to hire juniors at roughly the same rate, but reduces investment in their training because “AI will teach them.” This is the worst posture of the four. The juniors arrive, find no senior willing to spend mentoring time, fail to develop, and leave or are quietly let go after eighteen months. The firm congratulates itself on still hiring, while producing exactly zero new seniors.
The Restructurer. Recognises the apprenticeship is broken and tries to rebuild it explicitly. Ropes & Gray’s late-2025 “TrAIlblazers” pilot is the most public example: first-year associates are encouraged to spend 20% of their billable target — roughly 400 hours a year — on AI training and experimentation, with those hours counting toward their internal evaluations. It is an honest admission that the old model is dying and that someone has to pay to build the new one. Whether 400 hours of AI training a year produces the kind of judgement that 1,500 hours of document review used to is a separate question. But the firm is at least showing up for the conversation.
The Inverter. The most interesting and rarest response. A small number of firms — generally smaller, founder-led, long-horizon — are doubling down on junior hiring precisely because they expect a senior shortage in eight to ten years and intend to be the ones who have the people. They treat the senior bench as a strategic asset and the apprenticeship as their moat. If the Stanford data is even directionally correct, these firms will look extraordinary in 2034.
Most firms reading this will recognise themselves in one of the first two. The third is hard. The fourth is rarer still. But the choice is being made — actively, by inaction, every quarter that goes by without a deliberate position.
What gets lost when the apprenticeship goes
There is a temptation, when describing the apprenticeship, to romanticise it. It was often miserable. Junior bankers worked themselves into hospital beds. Junior associates billed eighty-hour weeks doing soul-crushing work. Junior consultants flew home Friday nights only to fly back Monday morning. Some of what is being eliminated is genuinely worth eliminating.
But the apprenticeship was never just labour extraction. It was, at its best, a transmission system. It transmitted technical skill, of course — but also taste, ethics, professional norms, the unwritten rules of how to handle a difficult client, how to push back on a partner, how to know when something is wrong before you can articulate why. It transmitted judgement. None of that comes through in a textbook, and very little of it comes through in a six-week onboarding. It comes through in the proximity of doing real work, watching real seniors handle real situations, and slowly — over years — developing the same intuitions.
The thing we are at risk of losing is not the document review. We can lose document review. We should lose document review. The thing we are at risk of losing is everything that used to come with document review — the proximity, the watching, the slow soaking-up of how this profession actually works in the parts that aren’t written down.
You can replace the labour with AI. You cannot replace the proximity with AI. Or rather, you can try, but the people who emerge from a fully AI-mediated training process will be a different kind of professional from the ones who came before — and we are about to find out, at scale, whether they are good enough.
Practical implications
For early-career people: stop waiting for an employer to invest in your judgement. They have less incentive than they have ever had. Build the verification skill explicitly. Pick problems where you do the slow, manual version and the AI version, and notice the delta — that delta is where your future taste lives. Find seniors and pay for their time if you have to. Mentorship is now a market good. Treat it like one.
For hiring managers: stop screening for the skills AI now does well. Screen for taste, judgement under uncertainty, and the willingness to do hard reps. The juniors who will be your seniors in 2032 do not look like the ones who became your seniors in 2022. They are people who can articulate why an AI output is wrong without immediately being able to fix it — that is the verification muscle, and it is the most important hire you can make right now.
For leaders: your future senior bench is a balance-sheet item that does not appear on your balance sheet. Cutting junior hires this year saves money this year. It also liquidates the apprenticeship that produced every senior you currently depend on. Rebuilding that — formally, with budget, with senior time explicitly allocated to mentorship the way Ropes & Gray has allocated billable hours to AI training — is the most strategic move available to you in 2026. If you do not make it, your competitors will. And in 2034, they will have the only people who can actually run the work.
The uncomfortable truth
We are running an experiment we have not consented to. We are removing the entry-level rung from every knowledge profession at the same time, on the unspoken assumption that AI will somehow produce its own seniors. It will not. AI gets better at what AI does. Humans get better at judgement by doing the work — including, especially, the work AI now does.
The apprenticeship was never mere inefficiency. It was a transmission system. We are scrapping the transmission and hoping the wheels still turn. They will, for a while. They are turning right now, on the senior expertise we built up before all this started, and that expertise has perhaps a decade of inertia in it. Then it runs out.
The question for the next ten years is not whether AI is taking entry-level jobs. The question is who will be the senior partner, the staff engineer, the principal designer in 2034 — and whether anyone is willing to pay, today, for the slow, unglamorous, economically inefficient work that produces them.
If everyone waits for someone else to train the next generation, no one will.
We will look around in eight years and find the bench empty. We will be very surprised. We should not be.


