Merit Was Never The Point.
Exploring why merit has always been secondary to visibility, narratives, and power, and why AI is making that truth harder to ignore.
Merit. A beautiful word encapsulating the notions of grind, discipline, struggle, effort and resilience all at once. An iconic word. One of the first ones that come to mind when you want to praise others. The first metric we use to measure ourselves as we set our goals for the year ahead.
Merit in the workplace? A myth. A mirage. A trick. The kind of illusion that deludes you, particularly at the beginning of your career: “if I work hard, if I deliver above expectations, I will give myself chances to climb up the ladder”, your younger self may have told you. The ensuing maze of business imperatives likely disappointed you.
I don’t know if corporate meritocracies truly exist, but the rise of AI is exposing the truth about merit.
Let’s explore.
Merit was always mediated
The idea that work is rewarded purely on merit is a fallacy that survived decades of counterexamples.
Visibility, timing, and proximity to power have always shaped outcomes and determined promotions. In a human workplace, merit is mediated by humans. And the results of this mediation are predictable: messy, contestable, sometimes unfair. Any attempt to draw a clean correlation between your impact and your corporate elevation is in vain: there is none.
With AI, the mediation does not go away. Worse: it scales and it becomes opaque.
You may not have liked the messiness or ambiguity of human decisions. You may not like AI-based decisions either. Algorithmic mediation is silent, statistical, and harder to challenge. It filters out all the little things you do to make work happen: the extra hours spent polishing that presentation, the countless attempts to book that coffee chat with a decision maker, the diplomacy required to handle a rather rude email from a naysayer. Algorithmic mediation hides the grit. It hides merit.
Automation hardens the illusion
Let’s be fair to AI: it does not kill meritocracy. It just makes the myth more convincing.
Decisions feel objective because they are automated. Scores feel neutral because they are numerical. Rankings feel fair because they are consistent.
But before we hand over the keys to HR algorithms in 2026, let’s pause and reflect:
Consistency is not justice. Being wrong 100% of the time is consistent, but it isn’t fair.
Optimisation is not understanding. An algorithm can optimise for speed or clicks, but it cannot understand intent or nuance.
Prediction is not potential. Algorithms look backward at data to predict the future. They cannot measure your capacity to grow, pivot, or surprise.
The AI-powered corporate system rewards patterns that are easy to recognise. It does not value what is genuinely valuable, and it certainly struggles with edge cases.
For example, you may have lost a business opportunity due to budget cuts. The algorithm logs this as a “Loss.” But your ability to earn the client’s trust, your willingness to actively listen to their pain points, and your determination to articulate a customised value proposition may have secured their loyalty for the next opportunity.
You don’t know it yet, but you have secured future revenue.
Sadly, the invisible dedication you showed is ignored by the code. The system does not praise future earnings; it only praises past patterns. The messiness of human effort is being flattened into a data point, magnified by a model.
Legibility, the new advantage
If the system rewards patterns, you might wonder what’s needed to achieve success in the workplace of the future without losing your soul to a machine.
I have already written a lot about cognitive agency, and it will become a fundamental success driver at work. An interesting extension of this agency is legibility: the ability to make one’s contributions legible to machines. As a matter of fact, if you control your own thinking, you will retain control over your interactions with AI, subjecting the chatbot to your requests rather than the other way around.
Legibility is best supported by the idea of friction in human-machine interactions. In fact, for demanding tasks, slowness with AI boosts effectiveness. It increases accuracy. It elevates clarity. It is counterintuitive to many, and yet it is a recipe only a few apply, to great success.
There was once a digital divide. It will give way to a clear cognitive divide between:
Those who adapt their expression to be legible to machines
Those who refuse to flatten themselves into data points
At this stage of model development, it is hard to conceive of a chatbot that could deliver original insights. The dots that LLMs connect through their responses are, somehow, already connected.
While that lasts, originality remains a profoundly human trait. And it is an opportunity: through intelligent prompting, this presumably messy and unstructured originality can be clarified and magnified by a model.
This leaves us with a stark realisation: trying to make your work “readable” to these systems often means stripping away the very nuance that makes it valuable. The machine rewards standardisation, but your career is built on differentiation.
Don’t fall into the trap of becoming a dataset just to be seen. The “inefficiency” of building trust and relationships is not a bug to be optimised away; it is the only competitive advantage that remains.
So as we head into 2026, stop optimising for the algorithm. It won’t love you back. Optimise for the humans who can still see the invisible.


