The Algorithmic Glass Ceiling.
Exploring why AI is becoming the new corporate gatekeeper, and what humans must do to keep originality alive.
You want to give your best for this interview. You did your due diligence. You prepared your pitch. You even anticipated some curveballs.
You seek to give your best at work. Almost every day. You are relentlessly chasing the best outcome for your company, your team, your boss, yourself, your self.
Your behaviour? Exemplary, according to this former manager.
Your commitment? Exemplary, for this former teammate.
Your seriousness? Exemplary, judging by what your former client says.
And yet. You have been let go. Either formally, under the pretext that “you are not a good fit”, or informally, because you are forever stuck in your current role.
With AI making quick strides into the workplace, things may just get worse. A lot worse.
Is that unavoidable?
Let’s dive in.
The invisible threshold: when algorithms become gatekeepers
To make progress within your company -or simply to be recognised by your colleagues- you have spent a lot of time understanding its corporate mechanics. You have adjusted your attitude, your tone, your body language. Still, you were hired for a reason. Supposedly, you were hired because you’re… you. So you are managing a delicate balancing act between these adjustments and the need to stay true to yourself.
You are naturally inclined to please your manager by delivering reliable, consistent, regular updates and solutions.
This is where you are wrong.
Your work is increasingly being filtered, scored, prioritised and surfaced by algorithms. Sometimes in ways that barely register, like the email summary that pops into your manager’s inbox.
The implications run deep.
In the everything-now era, where time is, more than ever, money, the summary passes for substance.
Human-generated gossip -already driving promotions, stagnations and demotions in some workplaces- is now being replaced by its digital equivalent: AI-generated noise. In both cases, the signal is distorted, hidden or annihilated.
In other words, AI is the new intermediary between you and your leadership. What we knew at the hiring stage is now true at the collaborating stage as well: an AI platform suggests, or even decides, who gets to stand out.
The result?
An invisible threshold: if your work does not align with algorithmic preferences, it does not get seen.
A new segregation system: a brilliant employee whose unconventional writing gets penalised by AI summarisers, while average, AI-polished submissions rise to the top.
Your talent must now suit algorithmic taste to still be called talent.
The performance mirage: when fluency with AI outweighs originality
At this pace, we are entering a world where performance is not measured, but rendered. In this new environment, AI-fluent workers stand to gain a disproportionate performance advantage.
They may not think better. They may not have deep insights. But their output looks better to the systems evaluating them. They have an edge in playing the algorithmic game well. And because these systems optimise for consistency over insight, they quietly train entire teams to value what looks right over what is right.
Is that elevating company performance?
The answer is a resounding no. Algorithmic bias pushes everything back to the mean. The median pattern. Dullness. Mediocrity. Extrapolate this phenomenon across the organisation and you have a company where risk and originality are discouraged. Because only “the norm” gets algorithmically rewarded.
The company’s creativity EKG flatlines. And guess what: in many industries, this has material business impact. Take innovation, for instance. It is required everywhere, regardless of sector, to generate product improvements or operational gains. Deliver less, and someone else will take your spot in the value chain. Deliver slower, and someone else will swap positions with you in a jiffy. In that kind of competitive pressure cooker, a workforce trained to avoid algorithmic deviance becomes a strategic liability. These are the rules of the trade; they are not always fair, but at least everyone knows them.
Reversing this line of reasoning, inventiveness, creativity and originality -or even friction, one of their common denominators- still have a corporate future.
This could help delineate the complex human-AI collaboration model at work.
Breaking the algorithmic ceiling: human traits that bend the system
Because one must confess this model is still being shaped.
In very human fashion, when a disruption is introduced, we start by expressing radical views. AI is no exception: today’s views on the subject are dichotomous. You hate AI, or you love it. But polarisation clouds judgement, and it prevents organisations from asking the only question that matters: where does AI genuinely create value, and where does it quietly dilute it?
In times where noise easily drowns out signal, it is a challenge for human-centric technologists to define the most advantageous position for AI at work –a position in which the good use cases are amplified and the bad ones regulated out or excluded.
Let me give it a try, though. If I were to draw up a sustainable human-AI collaboration model, it would follow a basic yet effective approach on two levels:
At a company level, 3 necessities for leaders:
Acknowledge that AI cannot work without humans, and that the opposite is not true. The “people are our greatest asset” ultra-bland tagline -and its associated variations- needs to be rejuvenated by reintroducing human review in the performance evaluation process.
Reward algorithmically invisible work. Just to name a few, judgement, mentorship, dissent, synthesis are all viable skills that keep companies going.
Update their performance systems accordingly to measure thinking, not formatting. Some organisations are moving away from rigid numerical measurement to more qualitative metrics. In a performance review, hard metrics should still be assessed, but they should not carry more weight than the softer metrics showing how the employee tried to achieve them.
At an employee level, 3 shifts:
Shape AI tools by reframing prompts, customising outputs and overriding defaults. This is harder than it looks as it goes beyond simple AI literacy. It is about pausing, analysing and deciding consciously. Not every AI output is worth your attention.
Inject non-conformity. As mentioned earlier, you have been hired because you’re you. So be you with AI and don’t let AI speak on your behalf. Your personal insights, your lived experience are irreplaceable assets that can be magnified by the proper use of a chatbot.
Build hybrid outputs. Easier said than done, for sure, because it implies more effort. But a half-human, half-AI output will always be more authentic, even with its imperfections.
Today, the conditions are not met for this approach to take root. The AI journey in corporate organisations is still too immature. There is undeniable excitement from shareholders to see yet another cost-reducing technology being rolled out. There is still less excitement to get the human-AI collaboration model right.
This is why the current transition phase is so precarious: organisations are adopting AI faster than they are updating the cultural and structural safeguards that should accompany it.
If we are not careful, the most dangerous ceiling in the modern workplace won’t be structural or political.
It will be invisible, automated, and mathematically justified. When algorithms decide what is “relevant,” “useful,” or “high-signal,” they begin to decide whose thinking deserves to be seen. And once visibility becomes machine-mediated, originality no longer competes on merit. It competes against statistical patterns designed to minimise surprise.
Human originality becomes collateral damage in a system optimised for efficiency rather than discovery. The only real safeguard is intentional friction: leaders who reward unconventional thought, teams that challenge machine-filtered consensus, and individuals who refuse to delegate their intellectual edge to automation.
Protecting originality is no longer a romantic ideal; it’s fast becoming a strategic necessity for both individuals and organisations.