The Originality Crisis.
Exploring the quiet decline of human originality in an AI-accelerated world.
When the first movie was projected, the experience was so novel that it startled spectators. The people in attendance were not simply watching moving images: they were living a cognitively engaging moment – and undoubtedly an immersive one.
When the first black and white televisions appeared, the experience became more passive because the expectations were known. Yet our imagination was still put to work: we had to picture what the colours were, what the atmosphere was like. It was a mind-blowing technological breakthrough, but one of those instances where technology still forced us to be intellectually present.
Then came colour TVs. The Internet. Mobile devices. More convenience all around, sending the human passivity stock to all-time highs.
Then came AI. Generative AI. Sending the human originality stock to record lows.
What are the implications in the workplace? Is this trajectory inevitable?
No friction, no formation.
For centuries, human originality has been forged in friction. This ability, blending curiosity and creativity, has certainly allowed us to survive as a species. Think of the mental resistance you feel when you don’t know or can’t recall something. Think of the moment when you must figure something out from first principles. The transaction between your brain and your objective is not smooth. Your temperature may rise through the difficult moment, and you may not enjoy the experience. And yet, before you know it, you have built up cognitive stamina and intellectual depth. You have formed a distinctive view - yours.
Decades of technological progress have been making our lives easier and our work faster. It’s been mostly about erasing intellectual challenges. Generative AI certainly represents the culmination of these efforts to collapse the friction curve.
You need to write something. Do you want to stare at a blank page for hours?
No, you bypass the struggle and turn to your favourite chatbot.
You need to understand a concept. Do you want to painfully read through the sources and work out how the ideas connect?
No, you skip the effort and ask your model of choice for a summary.
You need to synthesise a report. Do you even want to read it in the first place?
No, you avoid the challenge and beg your AI partner of choice to do the heavy lifting on your behalf.
We now get the output of thinking without the work of thinking. And it’s a slippery slope.
With our brains, we possess some seriously smart technology. For millennia, they have been adapting themselves to drastically different circumstances. GenAI is hardly an obstacle for them: they can easily adjust their behaviours to the added convenience. Give brains an easier path and they’ll gladly follow it. Give them more convenience and they’ll expect it to be the new normal.
When the starting point is always provided, we forget how to construct one.
When ambiguity disappears, so does our ability to navigate it.
When mental effort becomes optional, the ability to generate original thought becomes exceptional. Worse, it becomes rare.
Consequences in the workplace are straightforward: polished deliverables that lack conceptual depth, strategies that look interchangeable, and teams that converge around the median idea because the model’s first draft sounds good enough.
Technology didn’t make us less intelligent. It is simply removing the conditions under which intelligence develops. We would be wise to keep that in mind as we build a future in which humans and AI coexist. Otherwise, workplaces will become places of mere reaction.
More reaction, less origination
In fact, you may already be witnessing this evolution: the workplace is being split between colleagues who react to machine-generated content, and those who originate their own.
Call the first category reactive workers and the second originators; the two are strikingly different:
Reactive workers begin every task by asking their favourite AI chatbot for a first version.
Originators begin every task by preserving the first few minutes of cognitive struggle.
This behaviour has short-term consequences:
For reactive workers, the chatbot’s output becomes their cognitive anchor. They spend time editing, reviewing and curating. The machine shapes their thinking. Shapes them.
For originators, the frame is theirs. They draft it. They frame questions before seeking the answer, with AI as an accelerator, not a starting point. They shape their thinking with the help of a machine.
This can then lead to longer-term implications:
Reactive workers never build their own mental models. Their output sounds good, but its intellectual authorship and depth are, at best, questionable.
Originators become problem solvers, by defining clear boundaries before engaging with AI.
As AI is adopted across the workplace, you may not notice anything at first. But if the distinction feels subtle today, it will become seismic.
Enlarge the aperture from employees to employers, and you stare at a bleak prospect. Organisations that rely too heavily on AI without nurturing and protecting original thinkers will end up with:
Strategies that seem to come from the same ghostwriter
Teams that optimise, instead of rethink and reframe
Cultures that become dull because nobody even thinks of challenging the default; after all, the default sounds “smart”.
Leaders who mistake fluency for insight
Originality in the workplace may be declining, but it is not because humans lost creativity. It’s because the workplace rewards editing over thinking.
Is there a more appealing outlook?
Yes. When everyone has access to the same AI chatbots, value shifts upstream.
Not in output, but in interpretation.
Not in speed, but in clarity.
Not in drafting, but in defining.
Not in editing, but in owning.
Think, pause and go
If AI automates intelligence, the only remaining advantage you have is your own agency: the ability to shape your own thinking rather than inherit it from a model. Cognitive agency is the modern version of craftsmanship. It’s the active mental stance of choosing your inputs, your frames, and your starting point instead of passively accepting what a system provides.
The good news: agency can be protected. The better news: agency can be trained.
But only with intention, because AI makes passivity feel productive.
I certainly don’t have all the answers for you to reclaim what was yours, but let me craft a path forward based on my own experience:
Start by framing your thoughts. Just you, yourself and the thinking marvel that sits inside your skull. Write two statements describing the problem you need to solve or the action you must take. Resist the urge to prompt. Your self-esteem will thank you later. With these anchors, you own the frame and will control the conversation with any model.
Leverage a model to challenge your thinking. Conversational AI needs to be what it is: a conversation. You lead, the model follows.
Treat AI outputs as hypotheses, not answers. This phase is probably the hardest: it asks you to adopt a verification mindset. Questions like “what would make this wrong?” or “what perspective is absent here?” are two good examples to use. This back-and-forth solidifies judgement skills, which atrophy when left unused.
Regaining ownership of our thinking does not happen only through better ways of interacting with AI models. It also happens outside the confines of human-machine interaction, and it revolves around two axes:
Input curation to build originality: our ideas are downstream of our information diet. If we consume the same inputs as everyone else - tweets, summaries, medium-form thinkpieces - we will produce the same ideas as everyone else. Depth requires asymmetry of input: books, long-form writing, primary sources, and lived experiences.
Productive discomfort in everyday workflows: it’s worth reintroducing short periods where we think without tools. When we sketch, map, or list questions, we rebuild the cognitive stamina AI erodes. What may seem inefficient in the everything-now era actually pays off in clearer thoughts and sharper insights.
If the last era rewarded execution, the next one will reward discernment. AI will keep getting faster and smarter. That part is inevitable.
What is not inevitable is whether humans become passive beneficiaries or active thinkers. The companies — and individuals — who win will be the ones who refuse to let their originality collapse under the weight of convenience.
Not by rejecting AI, but by using it without surrendering the one advantage no machine can replicate: the freedom to think independently.


