The AI Brain Heist.
Exploring what happens when we treat thinking as an inconvenience, outsource it to machines, and wake up one day unable to recognise the texture of our own mind.
The human species has always had a profound sense of self-preservation. Paradoxically, the fear of the unknown has pushed us forward, not inward.
Fire, the printing press, electricity, the microprocessor and the Internet are tangible, game-changing examples of what humans can create to improve societies or spread knowledge. Given the impact many of us anticipate it will have, Artificial Intelligence certainly deserves a spot on that list, but the jury is still out on the benefits-to-drawbacks ratio. Even without waiting for further use cases to gain traction, it is safe to say that this technology will profoundly alter one of our most human traits – the ability to think.
Let’s dive in.
The path of least resistance
Mother Nature has gifted humans with many amazing features that have propelled us to the top of the food chain – on planet Earth, that is – despite our insatiable appetite for… laziness. Indeed, in an evolutionary sense, human cognition is fundamentally lazy. For most of our history, survival came first: saving energy was not an option. It was a way of life. Thinking deeply, critically or even abstractly was a luxury, given the energy the intellectual effort required. Shortcuts of all kinds were features to celebrate.
Fast forward a few thousand years: the wiring has not changed. The environment has. We are now faced with the smoothest cognitive shortcut ever created: AI – and GenAI in particular. If you are reading this, you know the drill: why struggle with a difficult concept if a model serves you an answer in the blink of an eye? Why activate your thinking if a chatbot spits out a convincing argument?
We once had a survival mechanism in the form of creative shortcuts, but those shortcuts were the result of micro-struggles. With Generative AI, thinking becomes optional – expendable, as far as the brain is concerned.
In other words, the danger is not that AI makes us dumb. It is that it gives our biology exactly what it wants, at the right time, when we should aspire for the opposite.
The myth of effortless intelligence
To illustrate my point, I will make a somewhat audacious comparison with outsourcing. In the business world, outsourcing is common practice: as an outsourcer, you give up control over an activity that other companies can do better, for less. With AI – GenAI, more precisely – we are drawing up an outsourcer–outsourcee contract. As the technology makes inroads into our professional lives, we are starting to behave as if intelligence were something that can be outsourced. Proud as we are, we retain the title of “thinker,” but it’s an illusion at scale: AI writes, rewrites, summarises, analyses; we merely curate – and even then, only occasionally. Curation is not cognition. It is selection.
And as with many other activities, absence of practice leads to erosion. If AI handles the cognitive heavy lifting, we begin to forget what heavy lifting feels like. We get:
The answer without the uncertainty
The insight without the struggle
The clarity without the ambiguity
The problem is that human intelligence is forged in that struggle. What is commonly misunderstood is that effort is not a tax on intelligence. It is intelligence. Through mentally taxing, uncomfortable effort, we build mental maps, establish distinctions and turn raw information into durable understanding.
When we skip the effort, we skip the wiring. We unwire intelligence.
And with effort comes stamina for deep thinking. As AI democratisation continues, with all its current flaws, we no longer want to sit with complexity. Complexity is so 20th century.
Yes, effortless intelligence is appealing. I am even surprised that no big tech company has yet built an ad campaign around cognitive laziness. For now, the narrative remains about the greater good of humanity, but I bet marketing gurus have already been tempted to twist it. When that happens, the era of complete algorithmic compliance may truly begin.
The high cost of outsourcing judgement
We may have decided to retain the title of thinker, but keeping a human in the loop is far from enough if we want to safeguard human intelligence.
The opposite phenomenon is actually happening: the more we rely on AI to pre-think for us, the more our internal judgement circuitry atrophies.
The key here is that judgement is not knowledge. Judgement is the ability to assess competing realities, appreciate nuance, identify patterns that are not explicit. It is also the ability to sense when “something does not feel right.” Judgement is everything most of us do not relish doing: it is slow and effortful. It is built and strengthened through friction.
With AI removing the friction, we get answers with no sense of the perspective behind them. Certainty presents itself without instilling any doubt along the way.
Over time, the consequences are clear: we will get worse at evaluating the very outputs we depend on – in business decisions, and probably personal ones, too.
AI is not tricking humans. Chatbots have no consciousness, although I know some people will argue that they do. The real danger is that we will soon stop being able to tell when we are being tricked. Judgement is like a muscle: unused, it weakens. Outsourced, it simply collapses.
If we start observing reality through AI as a primary filter, we will lose the very human capacity to interrogate reality ourselves – the very capacity that produced our breakthrough discoveries. Take Isaac Newton and gravity, Alexander Fleming and penicillin, or Michael Faraday and electromagnetism, to name just a few.
The cost of over-reliance? Huge. Immense. Unbearable. Without strong personal judgement, humanity becomes algorithmically compliant: intelligent enough to use and operate tools, but not independent enough to question them. And without asking ourselves these questions, there can’t be real progress.
Human intelligence degrades not through stupidity, but through too much convenience.
That’s the paradox of the AI era. Intelligence will not be scarce, but human intelligence may be.
AI will keep getting smarter, with LLMs. It will keep getting more relevant, with SLMs. It will become more ubiquitous, via world models. That part is inevitable.
What is not inevitable is whether we keep getting smarter alongside it. If anything needs protecting, it’s not our jobs. Not our routines. It’s our cognitive agency – the human ability to think slowly, creatively, originally and independently, even when shortcuts exist all around us. In the business world, the everything-now era is unfortunately pushing many of us down the path of least cognitive resistance.
Truth be told, we need voices that talk about the parts of intelligence where discomfort, unease and discipline are required. Otherwise, one day we will wake up with an abundance of answers and no ability to question, discern or assess them.
Human intelligence does not need protection. It needs participation.


