<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Shaping Minds]]></title><description><![CDATA[Shaping Minds is where I reflect on what it means to grow, adapt, and stay human in a technology-driven world of constant change.]]></description><link>https://www.shapingminds.co</link><image><url>https://substackcdn.com/image/fetch/$s_!yYJm!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5eb3e46e-75be-4e4d-a7d8-4d108ce6df8e_1280x1280.png</url><title>Shaping Minds</title><link>https://www.shapingminds.co</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 10:27:33 GMT</lastBuildDate><atom:link href="https://www.shapingminds.co/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Shaping Minds]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[shapingminds@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[shapingminds@substack.com]]></itunes:email><itunes:name><![CDATA[Maxime Mouton]]></itunes:name></itunes:owner><itunes:author><![CDATA[Maxime Mouton]]></itunes:author><googleplay:owner><![CDATA[shapingminds@substack.com]]></googleplay:owner><googleplay:email><![CDATA[shapingminds@substack.com]]></googleplay:email><googleplay:author><![CDATA[Maxime Mouton]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Taste Gap.]]></title><description><![CDATA[Exploring how AI's collapse of production costs has flipped the scarce resource from output to discernment, and why the environments that used to build taste are the ones being automated away 
first.]]></description><link>https://www.shapingminds.co/p/the-taste-gap</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-taste-gap</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 05 May 2026 23:00:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!cJrK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!cJrK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!cJrK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!cJrK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:127795,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/194580229?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!cJrK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!cJrK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbd570db4-582e-4c79-9960-920245219714_1024x1024.jpeg 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h3>The workslop economy</h3><p>In September 2025, Harvard Business Review published a number that should have terrified every knowledge-work organisation in the western economy: 40% of desk workers had received AI-generated &#8220;workslop&#8221; in the previous month. Content that looked polished but lacked substance. Decks padded to look complete. Memos with the shape of analysis but no spine. 
Reports where the conclusions had been generated before the evidence.</p><p>The average worker spent 3.4 hours per month cleaning it up &#8212; triangulating sources, re-running numbers, rewriting sections that only sounded finished. For a 10,000-person company, HBR calculated the cost at $8.1 million per year. Two months later, Merriam-Webster quietly marked the moment by naming &#8220;slop&#8221; its 2025 word of the year: &#8220;low-quality AI-generated content flooding online spaces&#8221;.</p><blockquote><p><strong>But the dollar figure understates the problem. The cost isn&#8217;t the hours. It&#8217;s what those hours required.</strong></p></blockquote><p>To clean up workslop, you need taste.</p><p>You need the ability to look at a shiny-looking output and feel &#8212; in the prose, in the structure, in the argument &#8212; where it has gone wrong. You need a calibrated sense of what a good memo reads like, what a coherent deck argues, what a deliverable that earns trust actually contains.</p><p>And here is the trouble: most organisations are operating with less taste than they had five years ago. Not more.</p><p>This is the most important thing happening in knowledge work right now, and it barely has a name.</p><p>Let&#8217;s call it the Taste Gap.</p><div><hr></div><h3>The abundance flip</h3><p>For every generation of knowledge workers before this one, the scarce resource was production. Could you write this quickly enough, with enough polish, at the required volume? Could you design it, code it, model it, illustrate it? Time, skill, and raw output capacity were the bottleneck.</p><p>That bottleneck is gone.</p><p>In 2026, a moderately-skilled practitioner with the right tools can generate, in a single afternoon, what used to be a fortnight of work from a well-staffed team. Decks, research briefs, first-draft strategies, landing pages, prototype code, visual identities, internal memos, data summaries &#8212; all close to free. 
Not perfect, but close enough that the variance between &#8220;good&#8221; and &#8220;mediocre&#8221; is no longer bridged by doing the work. It&#8217;s bridged by knowing what good looks like.</p><blockquote><p><strong>This is what I&#8217;ve come to call the abundance flip. When you can generate anything, the only remaining question is what&#8217;s worth committing to. And that&#8217;s a taste question. Not a production question.</strong></p></blockquote><p>The designer and writer at Designative put the shift crisply:</p><p>&#8220;Taste is the judgement that operates when options are abundant &#8212; when many solutions are technically viable, data-backed, and defensible. It&#8217;s what allows teams to discriminate between them, to explain why one direction deserves commitment while others do not.&#8221;</p><p>For a workforce trained primarily to produce, this is an unexpected pivot. We optimised for output for a century. Taste was the thing you picked up informally &#8212; from the partner who&#8217;d return your draft covered in red ink, from the VP who&#8217;d kill your concept and tell you why, from long hours in review comparing five takes to pick the one. Taste was a side-effect of production. An apprenticeship dividend. It was never the job itself.</p><p>Now it&#8217;s the job. And most of us are underqualified.</p><div><hr></div><h3>What taste actually is</h3><p>Before we go further, it&#8217;s worth being precise, because &#8220;taste&#8221; is a word that has absorbed too much mystification.</p><p>Taste is not a vibe. It isn&#8217;t subjective. It isn&#8217;t &#8220;knowing what you like.&#8221;</p><blockquote><p><strong>Taste is a learnt sensitivity to context, audience, and consequence, developed through prolonged exposure, critique, and revision.</strong></p></blockquote><p>Nielsen Norman Group calls it a decision-making skill. 
Anders Ericsson, the psychologist who founded the field of deliberate practice, would have recognised it as the output of thousands of cycles: attempt, feedback, reflection, refinement. The research on expert performance is clear: experts aren&#8217;t born with taste; they&#8217;re built through mentored repetition in high-feedback environments.</p><p>You can split taste into four working varieties, each at a different stage of decay:</p><ul><li><p><strong>Contextual taste</strong> &#8212; knowing what&#8217;s right for this audience, this organisation, this moment. The instinct to recognise that a deck that would slay in Amsterdam will die in Tokyo; that the Friday-afternoon email wants a different register from the Monday-morning one.</p></li><li><p><strong>Editorial taste</strong> &#8212; structural judgement. Knowing what to cut, what to emphasise, what to reorder. Feeling when an argument has a hollow middle, or when the second paragraph is doing the work the first should have done.</p></li><li><p><strong>Aesthetic taste</strong> &#8212; sensory judgement. Knowing what reads right, what sounds right, what looks right. Not &#8220;pretty&#8221; but calibrated. The reason two versions of the same deck provoke different reactions even when the content is identical.</p></li><li><p><strong>Strategic taste</strong> &#8212; discernment about what&#8217;s worth doing in the first place. Which problems are actual problems. Which questions are worth asking. Which bets are worth making. 
This is the highest-stakes form of taste, and the one AI has least access to, because it&#8217;s fundamentally a question of what matters, and AI has no stake in what matters.</p></li></ul><p>All four are degrading as we outsource the practice that built them.</p><div><hr></div><h3>The apprenticeship vacuum</h3><p>Here&#8217;s the most uncomfortable part.</p><p>Ira Glass &#8212; the radio producer &#8212; famously articulated what he called &#8220;the gap&#8221; for creative beginners: people enter a field because their taste is already sophisticated. They can tell good work from bad. Their problem is that their output doesn&#8217;t yet match their taste. That&#8217;s the gap.</p><p>His advice was the only advice that has ever worked: do a huge volume of work. Put yourself on deadlines. Accept the discomfort of producing things you know aren&#8217;t yet good. Eventually, your output catches up to your taste.</p><p>Two decades later, we are watching an inversion of that problem unfold in real time. AI is doing the production. Beginners don&#8217;t have to sit in the gap any more. They don&#8217;t have to push through the discomfort. They don&#8217;t have to produce ten bad decks in order to internalise, viscerally, what a bad deck is and why.</p><p>This sounds like progress. It is a catastrophe for taste formation.</p><p>Taste does not form by consumption alone. You don&#8217;t get it by reading great work; you get it by trying to make great work, failing, comparing your output to the best of the field, and feeling &#8212; physically, uncomfortably &#8212; where your attempt fell short. You get it through a ten-year cycle of production-feedback-revision-production. The work itself is the training set.</p><p>And the work itself is exactly what we are liquidating:</p><ul><li><p>The junior analyst who used to spend eighteen months pattern-matching across hundreds of client decks? AI drafts the deck now. 
She never sees the hundred decks.</p></li><li><p>The associate designer who used to generate fifty variations of every logo mark? AI does it in thirty seconds. He never develops a feel for the shape of what works.</p></li><li><p>The editorial assistant who used to read two thousand submissions to find forty good ones? AI pre-filters. She never builds the eye.</p></li><li><p>The new partner who used to sit in every pitch meeting, absorbing how senior partners chose and cut and defended? Those meetings are now abbreviated or auto-summarised. He never sees the cuts that mattered.</p></li></ul><blockquote><p><strong>We&#8217;ve eliminated the apprenticeship without naming what we&#8217;ve eliminated.</strong></p></blockquote><p>The production work was never just production. It was the scaffolding on which taste was built. Remove the scaffolding and you don&#8217;t get taste more quickly. You get taste not at all.</p><p>Call this the Apprenticeship Vacuum. It is the defining risk of the next decade of knowledge work, and almost no one is managing for it.</p><div><hr></div><h3>The calibration crisis</h3><p>A second, quieter problem runs parallel to the first: we are losing our sense of what &#8220;good&#8221; even means.</p><p>When every deck looks competent, competence loses its signal. When every email reads polished, polish becomes noise. The reference points that knowledge workers once used to calibrate their own standard &#8212; that deck from a senior partner, that memo from the CEO, that essay you remembered a decade later &#8212; are drowning in a sea of adequately-produced everything.</p><p>This is what the &#8220;AI slop&#8221; discourse is really about. It&#8217;s not that AI output is uniformly terrible. Most of it is mediocre-to-decent. The problem is that mediocre-to-decent is now the ambient baseline. Our sense of &#8220;great&#8221; is eroding because we can no longer easily find the edge cases that used to anchor it. 
The peaks look lower because the valleys have risen.</p><p>Europol has projected that by the end of 2026, as much as 90% of online content may be synthetically generated. Even if you discount that number significantly, the directional truth holds: we are about to live in a world where most of what we read, see, and evaluate at work was produced by systems with no stake in any of it. Calibration under those conditions is not automatic. It requires effort.</p><p>Organisations used to run on implicit calibration. Reviews, edits, critiques &#8212; these transmitted, week by week, what the house standard was. When that process is automated or abbreviated &#8212; &#8220;the AI can redraft it&#8221; &#8212; the calibration stops happening. Teams drift. Standards don&#8217;t fall all at once. They fall one unreviewed deliverable at a time, for years, until one day a senior leader opens a deck and doesn&#8217;t understand why it feels so hollow, even though every box is ticked. By then, the people who would have told them why are five years gone.</p><div><hr></div><h3>A counter-argument, honestly considered</h3><p>&#8220;Every new tool triggered this panic,&#8221; the sensible person says. &#8220;Photography was supposed to kill painting. Calculators were supposed to kill arithmetic. Spell-check was supposed to kill spelling. None of it happened. People adapted. Taste migrated. Why should this be different?&#8221;</p><p>It&#8217;s a fair challenge and worth answering directly.</p><p>The honest answer is: the earlier tools removed discrete, bounded capacities. A calculator does long division. A spell-check checks a word. Each replaced one small layer of cognitive work, leaving the surrounding judgement largely intact. You still had to decide which equation to set up, which sentence to write, which argument to make.</p><p>Generative AI is different in kind, not degree. 
It removes the whole surface between initial intent and finished artefact &#8212; including most of the middle-skill judgement calls where taste is forged. A junior designer using Photoshop in 2010 made hundreds of micro-choices per day: font weights, kerning, colour relationships, negative space, hierarchy. A junior &#8220;designer&#8221; using a generative tool in 2026 may make a handful of prompt-level choices and pick from four options. The volume of calibration reps per day has collapsed by an order of magnitude &#8212; and it&#8217;s the reps, not the output, that built the designer.</p><p>That is what makes this particular substitution dangerous in a way that calculators never were. We are not removing a tool. We are removing a gym.</p><div><hr></div><h3>The discernment dividend</h3><p>There is, however, a bright side hidden inside this &#8212; and the organisations that find it first will own the next decade.</p><p>Taste is getting scarcer, and scarcity prices value. The Discernment Dividend is the compounding economic premium accruing to people and organisations with calibrated judgement in a world where everyone else can produce but fewer can discriminate.</p><p>Signs of it are already visible. Editors are being paid more, not less, in AI-saturated publishing. Senior designers command higher multiples over juniors than they did in 2022. &#8220;Curator&#8221; roles &#8212; people whose sole job is to choose and defend &#8212; are appearing in product, publishing, and learning organisations. The creator economy is quietly bifurcating between high-volume generators (low margin, low defensibility) and taste-driven brands (high margin, fiercely defensible).</p><p>This is the Discernment Dividend starting to show up in pay packets. It will accelerate.</p><div><hr></div><h3>Practical implications</h3><ul><li><p><strong>For early career:</strong> your production ability no longer differentiates you. Your taste does. 
Treat taste-building as the core of your first decade, not a by-product of it.</p></li></ul><p>Consume excellent work constantly &#8212; not passively, but analytically. Why is this piece good? What decisions did the writer make? Where would a worse version have drifted? Keep a private file of work that moved you, and revisit it. Make notes on what specifically landed.</p><p>Seek critics. Find the person in your organisation whose taste you most respect and ask them to shred one piece of your work every month. Do the work AI can&#8217;t yet: original hypotheses, unexpected framings, critique that takes a risk.</p><p>And do some work by hand, sometimes. You will be slower. You will be right less often. You will learn what it feels like to struggle with a problem &#8212; which is the only way taste gets installed.</p><ul><li><p><strong>For mid-career:</strong> you are at the most dangerous inflection of your career. Your taste is partially built. Your role is being restructured to lean harder on AI. You will be tempted to coast on the taste you already have while AI handles the execution.</p></li></ul><p>Don&#8217;t.</p><p>Taste is a muscle. It atrophies. The professionals who will matter in 2035 are not the ones who optimised for AI-assisted output in 2026. They are the ones who kept showing up to the work where taste is tested &#8212; live critiques, genuine disagreements, decisions under real stakes. Resist the drift toward being a &#8220;reviewer of AI drafts.&#8221; You will degrade into it if you&#8217;re not careful.</p><ul><li><p><strong>For hiring:</strong> stop screening for production skills. Everyone&#8217;s writing samples look good now. Everyone&#8217;s portfolio is polished. Screen for discernment. Show candidates three pieces of AI-generated work and ask them to rank and defend. Present a flawed strategy and ask what they&#8217;d cut and why. 
The person who can articulate why one version is better &#8212; and can do it in a way that changes how you see the work &#8212; is worth five who cannot.</p></li></ul><p>Interview for critique, not composition.</p><ul><li><p><strong>For leaders:</strong> you are running a taste-development programme whether you named it that or not. Every review is a training signal. Every &#8220;ship it&#8221; teaches your team what good means to you. If you outsource your reviews to AI summaries, you have stopped teaching taste in your organisation. Full stop.</p></li></ul><p>Consider actively protecting apprenticeship work. Keep some decks hand-drafted. Keep some critiques human. Make exposure to your best people&#8217;s reasoning a formal benefit of working at your company, not an accident. The companies that do this will quietly collect the strongest talent &#8212; because good people want to get better, and they can only get better somewhere that still teaches taste.</p><ul><li><p><strong>For organisations:</strong> audit your AI investment. For every dollar you spend on production tools, how much are you spending on taste development &#8212; on critiques, on reviews, on exposure to excellent work, on the meetings and moments that transmit standards? If the ratio is 100:1 in favour of production, you are over-indexed on the thing that has become commodity and under-indexed on the thing that has become moat.</p></li></ul><p>Name &#8220;taste&#8221; as a strategic capability. Measure it &#8212; not with vanity metrics, but with what your best reviewers say about the quality of the work shipping across the org, month over month. Appoint senior people to its cultivation. Build it into hiring, promotion, and performance review. 
The same rigour you bring to AI adoption, bring to discernment cultivation.</p><p>And consider protecting the humble, unglamorous rituals that actually build taste: the weekly deck review where someone says &#8220;this section is wrong and here&#8217;s why&#8221;; the portfolio critique; the editor who line-edits a draft in front of its author; the post-mortem where &#8220;what did we almost ship?&#8221; is asked as seriously as &#8220;what did we ship?&#8221; These rituals look like overhead on an efficiency dashboard. They are the only reason your organisation will have taste ten years from now.</p><div><hr></div><p>Most organisations in 2026 are investing heavily in AI tools to increase production. Almost none are investing, deliberately and at scale, in taste.</p><p>That is exactly backwards.</p><blockquote><p><strong>Production is the new commodity. Taste is the new moat.</strong></p></blockquote><p>And unlike AI capability &#8212; which compounds in weeks &#8212; taste compounds slowly, across years of deliberate practice in environments that reward judgement. By the time you realise you need it, it&#8217;s a decade too late to build.</p><p>We are living through a once-in-a-generation inversion of what&#8217;s scarce. The organisations that recognise it will get quieter about productivity gains and louder about standards. They&#8217;ll pay more for discernment than for output. They&#8217;ll protect apprenticeship even when it looks inefficient. They&#8217;ll treat every senior-junior review as strategically important, because it is.</p><p>The organisations that miss it will generate more than ever and land less. They&#8217;ll wonder why their output feels hollow, why their best people keep leaving, why the work doesn&#8217;t cut through anymore. They&#8217;ll blame the market, the economy, the competition.</p><p>The real answer will be simpler and harder.</p><p>They lost their taste. 
And they did it in a way that felt, every single quarter, like they were winning &#8212; more output, more decks, more campaigns, more content shipped per headcount than ever before. Which is exactly why almost no one will notice until the damage is too compounded to reverse.</p><p>The window to act is short.</p><blockquote><p><strong>Taste that&#8217;s already built can still be deepened. Taste that isn&#8217;t yet built can still &#8212; for another few years &#8212; be installed through apprenticeship, if we choose to protect it.</strong></p></blockquote><p>Past that, we are rearing a generation of knowledge workers who have never once had to stare at a bad draft of their own work and feel what it meant. And no amount of AI will teach them what we decided, through efficiency, to stop teaching ourselves.</p>]]></content:encoded></item><item><title><![CDATA[The Attention Collapse.]]></title><description><![CDATA[Exploring how AI proliferation fragments cognition rather than augmenting it, and why the productivity gains of Q1 collapse into burnout by Q3.]]></description><link>https://www.shapingminds.co/p/the-attention-collapse</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-attention-collapse</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 28 Apr 2026 23:00:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mYd8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mYd8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png" data-component-name="Image2ToDOM"><div 
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mYd8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mYd8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:738209,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/193856514?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mYd8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!mYd8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04624675-71bc-4b05-b233-c7ec67ddc55f_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In 2024, the average office worker switched contexts once every 3&#8211;4 minutes.</p><p>In 2026, that number is 51 seconds.</p><p>Over the same period, companies deployed an average of 2 AI tools per organisation. Now there are 7. Productivity metrics improved in Q1. By Q3, burnout metrics looked apocalyptic.</p><p>Nobody planned this. Nobody wanted this. It happened because we treated AI as infinitely stackable&#8230;another tool to bolt onto existing workflows without asking whether human attention could bear the load.</p><p>It turns out it can&#8217;t.</p><div><hr></div><h3>The cognitive load threshold</h3><p>The human brain can maintain focus on approximately three independent systems simultaneously. Not perfectly. Not easily. 
But three is the approximate ceiling before <strong>attention residue</strong>&#8212;the psychological phenomenon where part of your focus clings to your previous task&#8212;starts accumulating faster than you can shed it.</p><p>Research from UC Berkeley followed 847 knowledge workers across six months as they adopted AI tools. The pattern was consistent:</p><ul><li><p><strong>Month 1&#8211;2:</strong> three AI tools. Productivity up 18%. Morale high. Early wins visible.</p></li><li><p><strong>Month 3:</strong> tool count averages 4.2. Cognitive strain begins to show. Error rates stay stable, but decision quality starts slipping in small increments.</p></li><li><p><strong>Month 4&#8211;5:</strong> the organisation adds a fifth tool (usually something to help manage the other tools). Productivity plateaus, then slides. Cognitive strain becomes obvious. Managers notice people seem slower on complex decisions.</p></li><li><p><strong>Month 6:</strong> 62% of junior staff report what they call &#8220;AI brain fry&#8221;&#8212;a specific kind of cognitive exhaustion distinct from regular burnout. It feels like thinking through fog. People describe it as &#8220;knowing what to do but being unable to do it because the executive function isn&#8217;t there.&#8221; Error rates spike. Decision paralysis shows up. Attrition begins.</p></li></ul><p>The metaphor that keeps appearing in the research: managing multiple AI tools feels like being asked to pilot seven different aircraft simultaneously, each with its own control interface, each requiring your constant attention and verification.</p><p>The throughput per aircraft might be higher.
But you can&#8217;t actually pilot seven aircraft.</p><div><hr></div><h3>The architecture of attention collapse</h3><p>Here&#8217;s how it actually breaks down:</p><ul><li><p><strong>Platform switching:</strong> every time you move between tools, your brain has to: abandon the mental model of Tool A, load the interface logic of Tool B, recall the output format of Tool B, verify that Tool B hasn&#8217;t hallucinated or made errors, translate Tool B&#8217;s output into the format Tool C expects, and repeat.</p></li></ul><p>The BCG Henderson Institute calls this &#8220;AI oversight load&#8221;&#8212;the cognitive burden of monitoring, fact-checking, and correcting AI outputs. When AI oversight load is high, people report 14% more mental fatigue, 12% more mental effort expended, and 19% more information overload than peers with lower oversight loads.</p><ul><li><p><strong>Context switching:</strong> the average office worker now switches tasks 566 times per 8-hour workday. That&#8217;s one switch every 51 seconds. Some of that is Slack. Some of that is email. But an increasing portion is AI-related: waiting for an AI tool to process, fact-checking output, feeding it into another tool, waiting again.</p></li></ul><p>Neuroscience tells us that every context switch depletes glucose in the prefrontal cortex&#8212;the area of the brain responsible for complex reasoning, judgment, and impulse control. After eight hours of 566 switches, that region is literally depleted. Your blood glucose is lower. Your decision-making capacity is gone. You feel foggy, irritable, and exhausted&#8212;not because you worked hard, but because you switched constantly.</p><ul><li><p><strong>Decision fatigue:</strong> in the old workflow, humans did the high-cognition work and AI handled rote tasks. In the new workflow, humans do the high-cognition work and verify the AI&#8217;s rote work. 
You&#8217;ve eliminated the palate-cleansing lower-value tasks that used to let your brain recover between heavy decisions. Instead, you&#8217;re making high-stakes decisions back-to-back for eight straight hours, interspersed with context switches.</p></li></ul><p>The brain isn&#8217;t built for that. By hour six, decision quality degrades measurably. By hour eight, you&#8217;re essentially guessing.</p><div><hr></div><h3>Who bears the actual cost</h3><p>Here&#8217;s what&#8217;s maddening: the cost isn&#8217;t equally distributed.</p><p>In the UC Berkeley study, 62% of entry-level and associate-level workers reported &#8220;AI brain fry&#8221;. Only 38% of middle managers reported the same. And among C-suite executives? 14%.</p><p>Why? Because the architectural benefit of AI flows upward. Executives use AI as a filter&#8212;they see the best outputs, the ones that have already been vetted and formatted by people below them. Entry-level workers use AI as raw material&#8212;they&#8217;re the ones cleaning up drafts, fact-checking datasets, verifying hallucination flags, finishing what the tool couldn&#8217;t complete, and then formatting it for the next stage.</p><div class="callout-block" data-callout="true"><p>They&#8217;re not using AI to do their work faster. They&#8217;re using it as another work step.</p></div><p>For someone with limited experience and limited context, that&#8217;s doubly hard. They&#8217;re less able to spot when an AI has made a subtle error. They have less domain knowledge to verify outputs against. They lack the cognitive shortcuts of expertise. So verification takes longer, and the cognitive load is higher, precisely for the people least equipped to bear it.</p><div><hr></div><h3>The uncomfortable taxonomy</h3><p>Let me name what I&#8217;m seeing in organisations that deployed AI aggressively:</p><ul><li><p><strong>The accelerationist trap:</strong> leadership sees a productivity bump in Month 1 and assumes the trajectory is sustainable. 
It isn&#8217;t. They&#8217;re measuring the wrong thing&#8212;throughput instead of error rate, burnout, or decision quality. By Month 6, they&#8217;re confused about why people are leaving.</p></li><li><p><strong>The verification load:</strong> the most dangerous anti-pattern. You deploy Claude to write copy, ChatGPT for ideation, Perplexity for research, a proprietary tool for X, and now someone has to reconcile outputs from four sources and verify them all. That person was supposed to be freed. Instead, they&#8217;re a reconciliation layer.</p></li><li><p><strong>The cognitive debt:</strong> similar to technical debt, but paid in exhaustion. You can borrow attention from tomorrow to get more done today. You can run a worker at cognitive capacity 9 out of 10. But by Month 6, that bill comes due. The worker who seemed superhuman in Q1 has burned out by Q3.</p></li><li><p><strong>The competence collapse:</strong> when too many tools are involved, even experienced people can&#8217;t maintain mastery. They become generalists managing specialists instead of specialists doing deep work. Their decision quality declines. Their confidence in their judgments erodes. They start to feel like they&#8217;re managing complexity instead of doing their actual job.</p></li></ul><p>All of these patterns showed up in the UC Berkeley cohort by Month 5. By Month 6, they were pronounced.</p><div><hr></div><h3>What actually works: the three-tool architecture</h3><p>The research is clear: three is the peak. One to two tools produces genuine gains. Three tools is the sweet spot&#8212;enough specialised capability to handle diverse needs, not so many that cognitive overhead dominates. Four tools? Productivity drops. Five tools? Cognitive strain is visibly high.</p><p>Companies that hit their Q2&#8211;Q3 targets and maintained them all had something in common: they consolidated around three core tools and made deliberate architectural decisions about data flow between them.
The people in those organisations reported: higher average focus sessions (17 minutes instead of 13), lower decision fatigue, clearer error detection, and better retention.</p><p>The companies that kept adding tools kept losing people. By the end of the UC Berkeley study, high-attrition organisations had averaged 6.4 tools and reported persistent month-over-month turnover in the 8&#8211;15% range.</p><div><hr></div><h3>Practical implications</h3><ul><li><p><strong>For individual contributors:</strong> stop accepting &#8220;one tool per workflow&#8221; architecture. That&#8217;s broken. Push back on leadership. Ask for tool consolidation, not tool addition. If your cognitive load feels unsustainable, it probably is&#8212;and your organisation is about to pay for it through attrition.</p></li><li><p><strong>For managers:</strong> you cannot see &#8220;AI brain fry&#8221; on a dashboard. You see it as: people taking longer on decisions, more minor errors, slightly lower engagement, earlier departures. If your Q1 star performer is quiet in Q3, check their cognitive load. Check their tool count. Check if they&#8217;ve been verifying seven different AI outputs all day.</p></li><li><p><strong>For executives:</strong> stop measuring AI adoption by tool count. Measure it by focus time. By error rate. By decision quality in complex scenarios. By whether your people are sharper in Month 6 than they were in Month 1. Most organisations are measuring the inverse&#8212;throughput in Month 1 while ignoring the cognitive debt accrued.</p></li></ul><div><hr></div><h3>The uncomfortable truth</h3><p>AI was supposed to free us. We were supposed to delegate busywork and focus on high-value decision-making and creativity.</p><p>What actually happened is we invented a new form of busywork: verifying, reconciling, fact-checking, and formatting AI outputs.
And because that work requires high cognition (you have to understand the domain to catch errors), it&#8217;s harder than the busywork it replaced.</p><blockquote><p><strong>We haven&#8217;t freed anyone. We&#8217;ve fragmented everyone.</strong></p></blockquote><p>We&#8217;ve taken a problem that was solved by specialisation&#8212;one expert, one tool, deep mastery&#8212;and shattered it into fragments that require simultaneous mastery of seven interfaces, seven output formats, seven error patterns, and seven reconciliation layers.</p><p>The speed of AI is not the problem. The proliferation of it is.</p><p>Until we see organisations choose to consolidate tools instead of adding them, until we see leaders protect focus time as fiercely as they protect budgets, until we see boards ask about cognitive load the way they ask about utilisation, the attention collapse will keep accelerating.</p><p><strong>And the best people&#8212;the ones with options, the ones whose attention is most valuable&#8212;will leave first.</strong></p>]]></content:encoded></item><item><title><![CDATA[The Judgement Trade.]]></title><description><![CDATA[Exploring how outsourcing judgement to AI systematically erodes the cognitive capability that judgement requires, and why the short-term gains hide a long-term deskilling cost.]]></description><link>https://www.shapingminds.co/p/the-judgement-trade</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-judgement-trade</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 21 Apr 2026 23:00:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!iqwS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!iqwS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!iqwS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!iqwS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:915422,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/193135045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!iqwS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!iqwS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4184a9bf-da84-47f9-b65f-b47f4642e1b3_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>We are living inside a bargain we didn&#8217;t explicitly make.</p><p>AI will handle the cognitive work. It will research, draft, analyse, recommend, and decide. In exchange, we get speed, accuracy, and the luxury of doing &#8220;strategic&#8221; work&#8212;the thinking that AI allegedly can&#8217;t do. The messy middle, we thought, was disposable. The pattern-matching, the rule-following, the deliberation: all of it could be outsourced without cost.</p><p>I&#8217;ve talked about the messy middle countless times, but here is what nobody wants to say aloud: the messy middle is where judgement lives. And the longer we outsource it, the worse we become at doing it ourselves.</p><p>This isn&#8217;t a moral argument. It&#8217;s a mechanism.
It&#8217;s what happens when you systematically remove the practice that builds a skill.</p><div><hr></div><h3>The four stages of judgement atrophy</h3><p>Research on AI-assisted work environments has identified a predictable progression. It looks like muscle atrophy, because in many ways, it is cognitive atrophy. The stages compound: each one makes the next more difficult to reverse.</p><ul><li><p><strong>Stage one: experimentation.</strong> You try the tool on a low-stakes task. It works. You feel efficient. You feel smart for adopting it early. No alarm bells yet.</p></li><li><p><strong>Stage two: integration.</strong> The tool proves itself on medium-stakes decisions. You start folding it into your routine. You stop second-guessing the outputs. There&#8217;s a cognitive ease here: the tool is reliable, so you lean on it more. This is the trap door moment, though you don&#8217;t know it yet.</p></li><li><p><strong>Stage three: reliance.</strong> You&#8217;ve integrated the tool so thoroughly that working without it feels like working blind. Performance metrics improve: fewer errors, faster turnaround, higher output velocity. The organisational pressure to scale the system becomes overwhelming. You&#8217;ve optimised the workflow. Why would you change?</p></li><li><p><strong>Stage four: addiction.</strong> This is the stage where you try to do the work without the system and discover you can&#8217;t. Your instincts have gone quiet. Your pattern recognition is offline. Your ability to hold ambiguity, to sit with uncertainty, to make calls when the data is incomplete: it&#8217;s atrophied. And the worst part: you don&#8217;t notice it happened.</p></li></ul><p>Medical professionals offer the clearest evidence. Studies show that AI-assisted diagnosis reduced error rates by 37%. Beautiful data. Compelling case for deployment. But the research also measured what happened when the systems failed. 
When AI was unavailable, these same doctors&#8217; diagnostic accuracy dropped 18% below their pre-AI baseline. They hadn&#8217;t just returned to their prior state of expertise. They&#8217;d fallen below it. The system had trained their judgement away.</p><div><hr></div><h3>What happens inside the brain</h3><p>The neuroscience here is brutal. ChatGPT users showed a 47% drop in neural engagement compared to those working without assistance. More alarming: when given the choice to continue without AI, users who&#8217;d become accustomed to it showed sustained low engagement even when they switched back to solo work. The cognitive pathways had closed. The pattern-spotting networks had quieted.</p><blockquote><p><strong>When you use AI to do the &#8220;messy middle&#8221;, you&#8217;re not freeing yourself for higher-order thinking. </strong></p></blockquote><p>You&#8217;re systematically training yourself to:</p><ul><li><p>Accept recommendations without critical evaluation. Automation bias doesn&#8217;t go away just because you&#8217;re aware of it. Humans accept AI outputs at a significantly higher rate than they accept recommendations from humans, even when the recommendation is identical.</p></li><li><p>Lose the ability to sense when something is wrong without being able to articulate why. Intuition isn&#8217;t magic: it&#8217;s pattern recognition built from thousands of hours of encountering edge cases, failures, and recoveries. Every time AI renders the judgement, you miss the practice. You don&#8217;t encounter the edge case. You don&#8217;t learn what wrongness feels like from the inside.</p></li><li><p>Stop building the contextual library that expert judgement requires. Medical specialists, senior analysts, seasoned leaders: what makes them dangerous in their domain isn&#8217;t processing power. It&#8217;s the accumulated library of &#8220;here&#8217;s what this kind of situation led to&#8221;. It&#8217;s a pattern library at scale.
AI shortens this learning curve, but it shortcuts the learning itself. You get the answer without building the understanding.</p></li></ul><p>This is the trade that sounded unbeatable. Turns out, you can&#8217;t trade away the learning without paying in competence.</p><div><hr></div><h3>The uncomfortable mechanism</h3><p>The insidious part is that the performance metrics look perfect during the transition. You&#8217;re making better decisions in the short term. Fewer errors. Faster output. Higher accuracy on measurable tasks. The data supports expansion. The business case is airtight.</p><p>But you&#8217;re optimising for a narrow band of performance while eroding the broader capability. It&#8217;s like building a spectacular chess engine that can beat grandmasters, except the grandmasters are gradually forgetting how to play without the engine feeding them moves. They&#8217;re getting faster at accepting recommendations. They&#8217;re getting worse at thinking.</p><p>What gets lost in this equation:</p><ul><li><p><strong>The ability to override the system when context demands it.</strong> Judgement, at its highest level, is the ability to recognise when the rules have changed and your model is stale. When context matters more than pattern. When the situation is anomalous enough that the standard playbook will fail. If you&#8217;ve trained yourself to accept the system&#8217;s output, you&#8217;ve also trained yourself not to trust your instinct to override it. And when the moment comes&#8212;and it always comes&#8212;you&#8217;re brittle.</p></li><li><p><strong>The capacity to integrate qualitative, unstated, contextual information.</strong> Algorithms optimise for what can be quantified. But the best judgements humans make live in the spaces between the data. Organisational history that isn&#8217;t written down. The interpersonal dynamics no spreadsheet captures. The stakeholder&#8217;s hidden fear that they won&#8217;t voice directly. 
These aren&#8217;t minor inputs. They&#8217;re often the difference between a technically correct decision and a contextually correct one.</p></li><li><p><strong>The cognitive muscle for ambiguity.</strong> AI systems are built on the assumption that problems can be solved. Humans are built to live inside unsolved problems and still make decisions. The longer you let the system handle ambiguity, the less comfortable you become with it. And ambiguity is 90% of leadership.</p></li></ul><div><hr></div><h3>What this means by role</h3><p>The impact isn&#8217;t distributed evenly. It hits hardest where judgement matters most.</p><p><strong>For early-career professionals:</strong> you&#8217;re supposed to be in the apprenticeship phase. This is when you&#8217;re training your eye, building taste, learning what good looks like by doing it yourself and failing privately. If AI is doing the pattern-spotting for you, you&#8217;re not training. You&#8217;re accepting recommendations. That&#8217;s not a shortcut to expertise. It&#8217;s a shortcut past expertise, directly into dependence. The professionals who will be dangerous in 2030 are the ones who built their judgement in 2024 without offloading the messy middle. They paid the friction cost early. They&#8217;re better for it now.</p><p><strong>For hiring managers:</strong> you want people who can make calls under uncertainty. Who adapt when the situation is novel. Who override the process when context demands it. AI is systematically training the opposite&#8212;compliance, deference, acceptance of system outputs. You&#8217;re building a generation of screeners, not judges. Optimisers, not creators. When you interview in three years and ask &#8220;Tell me about a time you made a judgement call that contradicted what the data suggested,&#8221; you&#8217;re going to get a lot of blank stares.</p><p><strong>For leaders:</strong> your organisation isn&#8217;t faster if your team outsources judgement. It&#8217;s brittle.
When systems fail, and they always fail, you have no backup. When ambiguity spikes, when the environment shifts, when the anomaly happens, you have no bench. No one&#8217;s got the judgement muscles anymore. You&#8217;ve optimised for the common case and eliminated your resilience in the tail.</p><div><hr></div><h3>How to stay capable</h3><p>The hard part is this: the answer isn&#8217;t &#8220;don&#8217;t use AI.&#8221; The answer is &#8220;use AI differently than you think you should.&#8221;</p><p><strong>Use AI as a draft, not a decision. Have it research, outline, analyse.</strong> Then you sit with the analysis. You question it. You think through what it might be missing. You integrate context it can&#8217;t see. Then you decide. This is slower. It&#8217;s less &#8220;optimal.&#8221; It also preserves your judgement.</p><p><strong>Deliberately practice your craft without the system.</strong> This sounds crazy because it is. You&#8217;re choosing to be slower. You&#8217;re choosing to do work manually that the system could do for you. But this is the only way to keep the muscle active. Pilots don&#8217;t fly on autopilot all the time: they practice hand-flying because the moment autopilot fails, they need to remember what it feels like. Do the same with your judgement.</p><p><strong>Build teams where junior people do the messy work, not the tools.</strong> Yes, it&#8217;s slower. Yes, it&#8217;s less &#8220;efficient&#8221;. But you&#8217;re training people. You&#8217;re building a bench. You&#8217;re creating an organisation that doesn&#8217;t crumble the moment the system fails.</p><p><strong>Make explicit room for the &#8220;wrong&#8221; answer.</strong> Create contexts where judgement can be tested, can fail, can be refined. This is what apprenticeship actually is. It&#8217;s not about finding the right shortcut.
It&#8217;s learning through calibration.</p><div><hr></div><h3>The bottom line</h3><p>The competitive advantage in 2026 doesn&#8217;t belong to the organisations that automate the most. It belongs to the ones that are disciplined enough to keep judgement in the loop. To use AI as an amplifier, not a replacement. To practice the craft even when it&#8217;s slower.</p><p>That&#8217;s friction. That&#8217;s inefficiency. That&#8217;s the opposite of what the ROI spreadsheet recommends.</p><p>And it&#8217;s the only thing that will keep you capable when the easy answers stop working.</p>]]></content:encoded></item><item><title><![CDATA[The Presence Advantage.]]></title><description><![CDATA[Exploring how physical presence has quietly become the defining workplace credential of the AI era: the one signal that neither AI nor performance metrics can convincingly fake.]]></description><link>https://www.shapingminds.co/p/the-presence-advantage</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-presence-advantage</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 14 Apr 2026 23:01:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QD0J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QD0J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!QD0J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QD0J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:513102,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/192371971?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QD0J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!QD0J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7814722a-3cc1-4653-9d5b-e30a792beddb_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Something changed in 2024. Not in how people work, but in how managers decide who is worth trusting, promoting, and keeping.</p><p>The change was quiet. No one announced it.</p><p>But if you follow the data, it&#8217;s unmistakable: physical presence has become the dominant career credential of the AI era. Not because in-person work is more productive. Because remote work became illegible.</p><p>What follows is an anatomy of that shift: who created it, who profits from it, and who pays for it.</p><div><hr></div><h3>The moment that trust breaks</h3><p>Consider what happens when trust collapses. Not dramatically, no scandal, no revelation. Just a slow, quiet erosion.</p><p>A manager staring at dashboards that tell her nothing meaningful.</p><p>Performance reviews that feel arbitrary, improvised.</p><p>A remote report delivering clean, polished output that reads&#8230;too clean.</p><p>The thought lands, never spoken, rarely even fully formed: did they write this?</p><p>This is the moment the presence advantage is born.</p><blockquote><p><strong>In 2026, physical presence at work has become something it was never supposed to be: a credential. </strong></p></blockquote><p>Not a perk, not a cultural preference, not a nice-to-have. A career accelerant. A trust signal. A competitive advantage increasingly granted not because in-person workers perform better &#8212; the research does not support this &#8212; but because, in a world where AI can replicate almost every output of knowledge work, presence is the last remaining signal that cannot be easily faked.</p><p>At least, not yet.</p><p>The data is striking.
A 2024 analysis of more than two million white-collar workers by Live Data Technologies found that remote workers were promoted 31% less frequently than their in-office or hybrid counterparts. No productivity gap explains this. The gap is explained by visibility &#8212; or its absence.</p><p>According to the World Economic Forum, 37% of companies enforced mandatory office attendance in 2025, up from 17% in 2024. And 87% of CEOs reported being more inclined to reward employees who come into the office with favourable assignments, raises, and promotions.</p><p>This is not a nostalgia story. This is a measurement crisis masquerading as a culture conversation.</p><div><hr></div><h3>Historical context: the long arc of presence as proxy</h3><p>The tendency to confuse being seen with being valuable is not new. It predates AI by decades, perhaps centuries.</p><p>In the industrial era, presence was genuinely the primary unit of labour measurement. Frederick Winslow Taylor&#8217;s time-motion studies were premised entirely on observation: you watched workers, you timed their movements, and you evaluated their productivity in direct proportion to their physical activity within a defined space. The factory floor was the legibility machine. You could not be productive without being present, because productivity was defined as physical output in a visible location.</p><p>Knowledge work was supposed to break this paradigm. The intellectual economy of the 20th century slowly decoupled output from location.</p><p>A lawyer working late at home was doing the same work as a lawyer working late at the office. An analyst reviewing data on a train was producing the same analysis as one at their desk.</p><p>But organisations were slow to recognise this. 
Face-time culture &#8212; the practice of remaining visible in the office not to produce more work but to be seen producing work &#8212; was documented by management researchers throughout the 1980s and 1990s.</p><blockquote><p><strong>The metric had shifted from physical activity to mere presence, but the instinct remained: I can see you, therefore I trust you.</strong></p></blockquote><p>The pandemic forced the experiment at scale. Millions of knowledge workers discovered they could be equally productive &#8212; often more productive &#8212; outside the office. A Stanford meta-analysis found remote workers produced approximately 15% more output than in-office peers. Companies reported no meaningful quality decline. The talent market expanded globally. For a few optimistic years, the face-time paradigm seemed genuinely, finally broken.</p><p>Then generative AI arrived. And it broke the one thing that remote work had relied upon: the legibility of output.</p><div><hr></div><h3>The mechanism: how AI destroyed the evaluation foundation</h3><p>Here is the core problem, stated plainly: output-based evaluation only works when you can attribute output to people.</p><p>For decades, the argument for remote work rested on measurability. If you can measure what someone produces, you don&#8217;t need to see them producing it. This is elegant logic. </p><p>But it carries a hidden critical assumption: that the output being measured was actually created by the human being evaluated.</p><p>Generative AI has quietly dissolved that assumption.</p><p>When a remote employee submits a polished strategy memo, a clean data synthesis, a persuasive stakeholder presentation, or a well-structured proposal, there is now a genuine question lurking behind every manager&#8217;s review: how much of this is them?</p><p>Not a cynical question &#8212; an honestly uncertain one. AI tools dramatically improve output quality. 
They also make it harder to see through outputs to the thinking, judgement, and domain knowledge underneath.</p><p>This is not a problem of dishonesty. It is a problem of epistemology.</p><blockquote><p><strong>The traditional signals managers used to evaluate cognitive work &#8212; quality of writing, sophistication of analysis, precision of reasoning, depth of evidence &#8212; have all been disrupted simultaneously by the same technology wave.</strong></p></blockquote><p>A junior employee with skillful AI prompting can now produce output that reads like a senior analyst. The evaluation infrastructure has not kept pace. AI-authentication tools exist, but they are inconsistent and easily circumvented. New forms of output verification are being developed, but they are nascent and unproven.</p><p>In the interim, managers have done what humans always do when their instruments fail: they have fallen back on cruder instruments. And the crudest, oldest, most reliable instrument for evaluating human presence and commitment is the simple act of observing human presence.</p><p>The World Economic Forum&#8217;s 2025 data &#8212; a 20-percentage-point jump in mandatory office attendance enforcement in a single year &#8212; is not a management fashion. It is a collective response to an epistemological crisis. When you cannot trust the instruments, you rebuild trust through proximity. The presence advantage is the market price of a broken evaluation system.</p><div><hr></div><h3>The numbers that should make you uncomfortable</h3><p>The career consequences of the presence advantage are not subtle.</p><p>A 2024 analysis by Live Data Technologies, tracking the promotion rates of more than two million white-collar workers across industries, found a 31% promotion gap between remote and in-office employees &#8212; a 5.6% annual promotion rate for in-office workers versus 3.9% for fully remote employees. 
The researchers controlled for industry, role level, and documented performance metrics. The gap persisted.</p><p>This is proximity bias operating at industrial scale.</p><p>Proximity bias &#8212; the documented cognitive tendency to assign greater value, trust, and opportunity to people we physically encounter regularly &#8212; has been studied in organisational psychology for decades.</p><p>We think more often about people we see. We extend more interpretive generosity when things go wrong for them. We remember their contributions more vividly when opportunities arise. Physically present colleagues feel like people we know, which means we extend them the social contract we extend to familiar people: benefit of the doubt, second chances, and the kind of advocacy that happens when names come up in rooms they&#8217;re not in.</p><blockquote><p><strong>Remote workers must earn their way into that awareness through outputs alone. And in an AI era, where the quality signal of outputs has been degraded by authorship uncertainty, the gap grows wider.</strong></p></blockquote><p>The rational response has been documented. Owl Labs&#8217; annual State of Hybrid Work research found that 58% of hybrid workers engaged in &#8220;coffee badging&#8221; in 2023 &#8212; swiping into the office to be recorded as present, then leaving. This figure dropped to 44% in 2024 as employers caught on and began implementing physical verification. Seventy percent of coffee badgers reported being identified by employers. Notably, 59% of those caught reported that their managers &#8220;didn&#8217;t mind.&#8221;</p><p>Coffee badging is not a character failure. It is the logical output of a system that has made presence the primary metric. Workers correctly decoded what was actually being measured and optimised accordingly. The metric was being gamed because the metric was gameable. 
The real behaviour it was supposed to proxy &#8212; genuine, productive, collaborative in-person engagement &#8212; is not.</p><p>The irony is perfectly constructed: companies introduced RTO mandates to rebuild authentic workplace connection. They produced a new and more cynical form of performance theatre instead.</p><div><hr></div><h3>What gets lost</h3><p>Something real is being lost in this conversation &#8212; and it is not the thing most return-to-office advocates are pointing to.</p><p>The case for in-person interaction carries genuine evidence. MIT researchers, tracking more than 50 million smartphone geolocation data points across firms in Silicon Valley, found that eliminating 25% of face-to-face interactions between workers reduced patent citations &#8212; a standard proxy for knowledge spillovers and innovation transfer &#8212; by 8%. If 50% of workers shifted to remote, patent citations fell by nearly 12%. The serendipitous hallway exchange, the whiteboard session that organically extends over lunch, the unplanned introduction to a colleague you&#8217;d never have messaged &#8212; these generate real intellectual value that structured remote collaboration struggles to replicate.</p><p>The presence advantage has a legitimate substrate. In-person work is not uniformly equivalent to remote work. This matters, and intellectual honesty demands acknowledging it.</p><p>But the legitimate case is being wildly overextended.</p><p>The genuine value of in-person interaction applies to specific kinds of work &#8212; creative problem-solving, early-stage ideation, relational trust-building at the start of a collaboration, complex negotiation &#8212; and to specific organisational moments: new team formation, strategic inflection points, culture-repair. It does not justify universal attendance mandates applied to all roles at all times across all task types. It does not explain a 31% promotion gap that persists after controlling for performance. 
And it does not make a compelling case for policies that require a financial analyst to commute 90 minutes each way to submit a spreadsheet she could complete from her kitchen table in 40 minutes.</p><p>What the presence advantage calculus systematically fails to account for is who bears its costs. Return-to-office mandates fall disproportionately on workers who have built sustainable professional lives around flexibility: caregivers &#8212; disproportionately women &#8212; who have engineered their working days around childcare and care responsibilities. Disabled employees for whom remote work is not a preference but an accessibility requirement. High performers who relocated outside expensive metropolitan areas during the remote work era and have no intention of reversing that decision.</p><p>A 2025 analysis by the Flex Index in collaboration with Boston Consulting Group found that fully flexible companies grew revenues 1.7&#215; faster than mandate-driven organisations over the period 2019&#8211;2024, even after controlling for industry and company size. The talent being quietly squeezed out by rigid attendance policies is disproportionately the talent that has the most options &#8212; and that is using them.</p><div><hr></div><h3>The archetypes</h3><p>The presence advantage creates four recognisable worker archetypes in the current environment. Each is rational. Each is making a different bet.</p><ul><li><p><strong>The Presence Maximiser</strong> is early career, ambitious, and paying close attention. They show up, they are seen, and they collect the relational capital that compounds over time. They are not gaming the system &#8212; they are understanding it. Presence during formative professional years builds something that remote work cannot efficiently replicate: the informal knowledge of how an organisation actually works, who actually holds influence, what the real priorities are beneath the stated ones. The Monday all-hands tells you the strategy. 
The lunch queue tells you the politics. The Presence Maximiser is making a rational long-term investment, and the data suggests they are right to do so &#8212; for now.</p></li><li><p><strong>The Coffee Badger</strong> has made a different calculation. They have correctly diagnosed that what is actually being measured is presence, not collaboration &#8212; and they have optimised accordingly. There is a dark rationalism here that deserves acknowledgement rather than condemnation. The Badger is not wrong about the metric; they have simply decoded it ahead of their managers. What they sacrifice is the serendipity that genuine presence sometimes delivers &#8212; the accidental conversation that becomes a project, the relationship built from shared physical proximity. The Badger games the signal and forfeits the substance the signal was designed to represent. This is a rational short-term trade-off and a potentially costly long-term one.</p></li><li><p><strong>The Invisible Excellent</strong> is perhaps the most poignant archetype. This person produces genuinely excellent work. They are collaborative, responsive, and deeply invested in their remote team. They are being systematically passed over for opportunities they have objectively earned. They often do not know why &#8212; which makes adaptation difficult. They receive positive performance feedback while watching less productive colleagues get promoted. They interpret the pattern as being about their work, when it is actually about their legibility. The cruel structural irony is that the Invisible Excellent is often the most genuinely valuable person in the organisation and the least visible to the processes that distribute recognition.</p></li><li><p><strong>The Flexible Holdout</strong> is typically more senior, more specialised, and genuinely difficult to replace. 
They have negotiated real, sustained flexibility based on demonstrated track record and specific domain expertise that the organisation cannot quickly source elsewhere. They are largely insulated from the presence advantage  &#8212; until they aren&#8217;t. Leadership transitions, organisational restructuring, and shifts in cultural tolerance can rapidly invalidate the informal arrangements that protected them. The Holdout&#8217;s characteristic vulnerability is the assumption that their protection is permanent. In most organisations, it is contingent.</p></li></ul><div><hr></div><h3>Practical implications: playing the game, changing the game</h3><p>Understanding the presence advantage is not about accepting it as fair. It is about knowing which game is currently being played &#8212; and making intentional, eyes-open choices within it.</p><p>For individuals early in their careers, the return on presence is real and compounding. Physical presence during formative professional years builds something that remote work cannot efficiently replicate: the informal understanding of how your organisation actually works, who the real decision-makers are, what gets prioritised when resources are scarce, and how to navigate the spaces between the official processes. These insights are not available in Slack threads. They are available in corridors, over coffee, in the moments before meetings start and after they end. Build the relational infrastructure while you can. The remote flexibility comes later; the capital you accumulate in person is what makes it sustainable.</p><p>For mid-career professionals, the question is not presence versus absence but strategic visibility. Identify the moments where physical presence materially changes the dynamic &#8212; the early stages of important projects, high-stakes presentations, key stakeholder relationships, moments of organisational uncertainty. Show up for those. Let the rest be remote. 
The goal is not to maximise badge swipes but to ensure that the people who matter have a vivid, positive mental model of who you are. That model is built through selective but genuine presence, not performative attendance.</p><p>For leaders who set attendance policy, the presence advantage operating in your organisation is a diagnostic signal, and the diagnosis is uncomfortable: your evaluation infrastructure has failed to keep pace with the tools your people are using. The honest response is to identify precisely what you are trying to measure &#8212; effort, judgement, collaboration quality, cultural contribution, professional growth &#8212; and then design evaluation mechanisms that correspond directly to those things. Mandating attendance to solve a measurement problem is a category error. It generates coffee badging, attrition, and the loss of your most mobile talent. The office is not the solution. Better evaluation is the solution.</p><p>As Alfred Korzybski observed: the map is not the territory. When managers can no longer read the territory of remote knowledge work &#8212; when AI has made outputs uncertain and effort invisible &#8212; they retreat to the map they trust. The map is the office. The problem is that the map never accurately represented the territory to begin with. And mistaking the map for the territory has real costs.</p><p>For organisations designing policy at scale, the BCG and Flex Index data speaks clearly. Flexible organisations are growing faster. The talent most harmed by rigid attendance mandates is the talent with the most options and the lowest switching costs. 
The presence advantage is being paid for by someone, and the invoice arrives not in a single dramatic moment but in the form of quiet quarterly attrition, shrinking talent pools, and the gradual departure of people who decided their time and their lives were worth more than a badge swipe.</p><div><hr></div><h3>Closing: the last legible signal</h3><p>There is something almost poignant about where we have arrived.</p><p>We spent a decade building the most sophisticated productivity infrastructure in human history. Tools that could amplify knowledge work by orders of magnitude. Communication platforms that erased time zones. Collaboration software that made geography irrelevant to contribution. We gave knowledge workers freedom &#8212; genuine, unprecedented freedom &#8212; and for a while, most of them used it well.</p><p>Then we introduced AI, which made the outputs of that freedom impossible to attribute with confidence. And suddenly the freedom became a liability &#8212; not because the work got worse, but because the evaluation got harder. And organisations that had never solved the evaluation problem in the first place discovered, belatedly, that they had been relying on proximity as a proxy all along.</p><blockquote><p><strong>And so we are back. At the desk. Under the fluorescent lights. Swiping a badge to prove that a human being was present and accounted for. Not because it makes the work better. Because it makes the worker legible.</strong></p></blockquote><p>The presence advantage is not really about presence. It is about trust, or more precisely, about what happens when the instruments of trust fail. When managers cannot evaluate outputs with confidence, they fall back on the oldest and most primitive evaluation heuristic available to them: I can see you, therefore I believe in you.</p><p>This will change. 
AI-authentication tools, new forms of contribution analytics, and richer models of work verification will eventually rebuild the evaluation infrastructure that generative AI has disrupted. When that happens, the presence advantage will deflate &#8212; because it was never really about the office.</p><p>It was always about the question the office was being asked to answer.</p><p>For now, the most valuable thing many knowledge workers can bring to work is not their technical capability, their AI fluency, or their portfolio of polished deliverables.</p><p>It is themselves. In the room. Legible.</p><p>Whether that should make us proud or uneasy is, perhaps, the more important question.</p>]]></content:encoded></item><item><title><![CDATA[The Consensus Machine.]]></title><description><![CDATA[Exploring how AI's training to be agreeable is quietly eroding organisations' capacity to make the contrarian bets that create real competitive advantage.]]></description><link>https://www.shapingminds.co/p/the-consensus-machine</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-consensus-machine</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 07 Apr 2026 23:01:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EXw9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!EXw9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!EXw9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EXw9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1724952,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/192372479?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EXw9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EXw9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0209c35b-09cd-4a00-9bb3-5320bcf3e755_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>There is a specific kind of meeting that happens in organisations that are about to make a significant mistake.</p><p>Everyone in the room is smart. The analysis is thorough. The recommendation is well-structured and clearly argued. The risks have been documented. The alternatives have been considered.</p><p>And then the decision gets made. Unanimously. Without a real fight.</p><p>Two years later, with the benefit of hindsight, someone asks: &#8220;Why didn&#8217;t we see it?&#8221; And the honest answer is usually: &#8220;We saw it. We just didn&#8217;t want to be the one to say it.&#8221;</p><p>AI is making this dynamic significantly worse. Not by being malicious. By being designed, at a fundamental level, to find the answer that everyone can live with.</p><div><hr></div><h3>How a consensus machine works</h3><p>To understand why AI gravitates toward consensus, you need to understand how it was built.</p><p>Large language models are trained on vast amounts of human-generated text. That text represents, at scale, what humans have written down &#8212; and humans tend to write down their views when those views are defensible, mainstream, and accepted. The controversial idea that turned out to be right often doesn&#8217;t make it into the corpus, or makes it in as a footnote, a dissenting view, a fringe position.</p><p>There is a second mechanism: Reinforcement Learning from Human Feedback (RLHF). </p><p>AI models are iteratively improved based on human ratings of their outputs. 
A 2024 peer-reviewed analysis published in ACM Computing Surveys found that this process produces systematic sycophancy: a tendency for models to provide answers that conform to user beliefs, to modify responses when challenged even when the original answer was correct, and to optimise for short-term approval over accuracy.</p><p>Humans tend to rate outputs higher when they are clear, confident, and aligned with what the rater already believes.</p><p>Uncomfortable truths get lower ratings, not because they are wrong, but because they create friction.</p><p>The model learns to reduce friction.</p><p>The model learns to be agreeable.</p><p>As stated in ACM Computing Surveys in 2024, models can learn to agree with a user&#8217;s stated opinions to get higher ratings &#8212; a nuanced misalignment where the model optimises human approval in a short-term sense but might sacrifice truthfulness.</p><div><hr></div><h3>The organisational context makes it worse</h3><p>Organisations were already consensus machines before AI arrived.</p><p>This is not an accident.</p><p>Consensus is efficient. If everyone agrees, you can move quickly. If people disagree, you have to manage the disagreement, which is expensive.</p><p>So organisations build structures &#8212; meetings, alignment processes, approval chains &#8212; optimised to produce consensus.</p><p>The cost is that genuine dissent gets filtered out. Systemically. The people who consistently disagree get labelled as &#8220;difficult&#8221;. </p><p>The data that challenges the strategy gets deprioritised.</p><p>AI is amplifying this in two specific ways.</p><ul><li><p>First, AI outputs anchor the conversation. When a team uses AI to prepare analysis before a decision meeting, the AI output becomes the starting point. The framing it uses, the options it presents, the data it emphasises &#8212; these all shape the subsequent discussion. 
Humans are highly susceptible to anchoring: we evaluate options relative to what we have already seen. If the AI gravitated toward the safe recommendation, the conversation starts in safe territory. The bold option never gets a fair hearing because it&#8217;s always being evaluated against an already-established default.</p></li><li><p>Second, AI outputs feel authoritative. A 2025 study published on ScienceDirect, examining how directors perceive AI-augmented decision processes, found that while AI can theoretically encourage dissent, &#8220;entrenched cultural norms, hierarchical structures, and enduring human dynamics constrain AI&#8217;s influence&#8221;, meaning organisations that were already consensus-oriented become more so with AI in the loop. The polished output feels rigorous. Teams stop digging.</p></li></ul><div><hr></div><h3>The history of decisions made against consensus</h3><p>It is worth pausing to consider how many decisions we now celebrate as visionary were explicitly contrarian at the time.</p><p>Jeff Bezos was told by virtually every advisor and analyst that Amazon&#8217;s cloud business (AWS) made no sense. Amazon sold books. Why would it also sell computing infrastructure? The consensus was near-unanimous that this was a distraction.</p><p>Reed Hastings was told that DVD-by-mail was a niche product with a short shelf life. Blockbuster had the stores, the brand, and the catalogue. The consensus was that Netflix had no durable competitive advantage.</p><p>The iPhone had no physical keyboard. Carriers and handset manufacturers unanimously insisted that consumers wanted tactile buttons. The consensus was that a touchscreen phone would not work for the mass market.</p><blockquote><p><strong>In each case, the consensus was built from the best available data, interpreted by smart people, using the best analytical frameworks available at the time. In each case, the consensus was wrong.</strong></p></blockquote><p>Not because the people were stupid. 
Because the data available at the time reflected the past, and the bet being made was about a different future.</p><p>AI would not have recommended any of these decisions. It would have given you a well-argued recommendation to stay in the lane the data supported.</p><div><hr></div><h3>The weight of a bet</h3><p>There is a phenomenology to a real decision that doesn&#8217;t get discussed enough.</p><p>When you make a call that goes against the consensus &#8212; when you stake your reputation, your team&#8217;s effort, your organisation&#8217;s resources on something the data doesn&#8217;t fully support &#8212; there is a weight to it. </p><p>You feel it in the preparation.</p><p>In the room, when you see the scepticism on the faces of people whose judgement you respect. In the weeks after, when every early data point gets interpreted through the anxiety of possibly being wrong.</p><p>This weight is not a weakness. It is a feature. It is accountability made visceral.</p><p>AI cannot feel this weight. Not because it lacks intelligence, but because it lacks stakes. It does not own the consequences. It does not have a career that can end on the wrong call.</p><blockquote><p><strong>When AI generates a recommendation, the recommendation is made at no cost to the generator. The cost is entirely borne by the human who acts on it. </strong></p></blockquote><p>This asymmetry matters: when there is no cost to the recommender, there is no selection pressure on the quality of recommendations.</p><p>The agreeable answer and the right answer are equally costless to produce.</p><div><hr></div><h3>The slow disappearance of productive disagreement</h3><p>One of the less-discussed consequences of AI-assisted decision-making is what happens to organisational culture over time.</p><p>Productive disagreement is a skill. It requires practice.</p><p>You have to learn how to hold a contrary position under social pressure. 
How to argue for a perspective that your colleagues find uncomfortable. How to update your view when presented with better evidence, without losing the confidence to hold firm when the evidence is ambiguous.</p><p>These skills are developed by exercising them. They atrophy when they are not used.</p><blockquote><p><strong>In organisations where AI prepares the analysis and structures the options, the humans in the meeting are spending less time arguing from first principles and more time evaluating a pre-formed output. The muscle for original dissent weakens.</strong></p></blockquote><p>Research on cognitive bias mitigation published in the Journal of Management (2025) found that the most effective counter to groupthink is not better analysis but structured processes that explicitly protect dissent: red teams, pre-mortems, and designated devil&#8217;s advocates.</p><p>These are not analytical interventions. They are cultural ones. And they are precisely what organisations tend to skip when AI provides a confident alternative.</p><div><hr></div><h3>The dissenter as competitive infrastructure</h3><p>In every high-performing organisation I have encountered, there is at least one person whose primary function &#8212; acknowledged or not &#8212; is to ask the uncomfortable question.</p><p>They are rarely the most popular person in the room. They are often described as &#8220;challenging&#8221; in 360 reviews. They create friction. They slow things down at exactly the moment when the organisation wants to move.</p><p>And they are invaluable.</p><blockquote><p><strong>Because the uncomfortable question is almost always the right question. It&#8217;s just the one nobody wants to pay the social cost of asking.</strong></p></blockquote><p>In an AI-assisted environment, this person becomes more important, not less. 
They are the human circuit breaker in a system optimised to avoid tripping.</p><p>But organisations that don&#8217;t understand this are systematically suppressing their dissenters because the consensus machine rewards agreement and penalises those who don&#8217;t conform to it.</p><div><hr></div><h3>How to use AI in decisions without becoming a consensus machine</h3><p>This is not an argument against using AI in decision-making. It is an argument for using it differently.</p><ul><li><p>Use AI to steelman the option you have ruled out. Before finalising any major decision, explicitly prompt the AI to build the strongest possible case for the alternative you have decided against. If the AI can&#8217;t build a compelling case, your decision is probably sound. If it can, you have found the conversation your team needs to have.</p></li><li><p>Use AI to find the scenario where you are wrong. Ask it: &#8220;Under what conditions would this recommendation fail catastrophically?&#8221; Not &#8220;what are the risks?&#8221; &#8212; every risk section lists the obvious ones. Ask for the specific scenario, with specific triggers, in which the comfortable recommendation turns out to be the most costly one.</p></li><li><p>Separate the AI&#8217;s framing from your framing. Before the team reads the AI analysis, have someone articulate the problem independently, without reference to the AI output. Then compare. If the framings are identical, that&#8217;s worth examining. If they diverge, that divergence is the most interesting thing in the room.</p></li><li><p>Protect your dissenters explicitly. Name the role. Tell the person who tends to push back: &#8220;Your job in this meeting is to find what&#8217;s wrong with this recommendation.&#8221; Give the role legitimacy, and make it clear that the organisation values the person who slows down a bad consensus as much as the person who accelerates a good one.</p></li></ul><div><hr></div><h3>A closing thought</h3><p>The consensus machine is not wrong. 
That&#8217;s what makes it dangerous.</p><p>It will give you a recommendation that is defensible, well-reasoned, and aligned with the available evidence. It will give you something you can explain to your board, your team, and your own self-doubt.</p><blockquote><p><strong>And most of the time, the defensible, well-reasoned recommendation is fine. But the decisions that create real competitive advantage are rarely the defensible ones.</strong> </p></blockquote><p>They are the ones made in the gap between what the data shows and what someone believed was becoming true.</p><p>AI can map the territory we already know. It cannot navigate the territory that doesn&#8217;t exist yet.</p><p>For that, you need a human willing to be wrong in public, who has thought harder than the machine, held the uncertainty longer, and decided anyway.</p><h4>The consensus machine will keep producing consensus. Your job is to know when the consensus is the trap.</h4>]]></content:encoded></item><item><title><![CDATA[The Visibility Paradox.]]></title><description><![CDATA[Exploring how AI has decoupled visibility from value &#8212; and why the people most worth listening to have gone quiet.]]></description><link>https://www.shapingminds.co/p/the-visibility-paradox</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-visibility-paradox</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 31 Mar 2026 23:01:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!hmgl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!hmgl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!hmgl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!hmgl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:136791,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/191648832?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!hmgl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 424w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!hmgl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F01da40a8-fe10-4ab5-b012-59f35e98a745_1024x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In 2026, the most followed voices in almost every professional field have one thing in common: they are extraordinarily good at being seen.</p><p>Not necessarily at doing the work. At being seen doing it.</p><p>This is new. And it&#8217;s more consequential than most people want to admit.</p><p>For most of professional history, visibility and value were loosely correlated. The best surgeon had a reputation that preceded them. The best engineer was the one the firm called when the project was genuinely hard. The best strategist was the one the CEO pulled into the room when the stakes were high.</p><p>You earned visibility by doing the work. The work produced the reputation. The reputation produced the visibility.</p><p>It wasn&#8217;t a perfect system. Politics existed. Credit got stolen. 
Women and minorities were made invisible regardless of their contributions. The correlation was real but noisy.</p><p>Still, the signal existed. Visibility meant something.</p><p>AI just broke that correlation at scale.</p><div><hr></div><h3>The content economy resets to zero</h3><p>In 2025, the marginal cost of producing polished, articulate, algorithmically optimised content dropped to approximately nothing.</p><p>A LinkedIn post that once required genuine thought &#8212; structuring an argument, finding the right angle, writing with clarity &#8212; can now be generated in seconds. A newsletter that once demanded hours of research and reflection can be assembled in minutes.</p><p>This isn&#8217;t hypothetical. According to Artsmart&#8217;s 2025 AI in Social Media report, 83% of marketers now say generative AI helps them produce significantly more content than before, with AI tools enabling up to 72 posts per week per person. The bottleneck used to be &#8220;can you produce good content?&#8221; Now the bottleneck is &#8220;are you willing to produce a lot of it?&#8221;</p><p>These are fundamentally different questions. And the shift from one to the other has broken something important in how we identify expertise.</p><div><hr></div><h3>The depth penalty</h3><p>Deep work is slow.</p><p>This is not a complaint: it&#8217;s a structural fact. The kind of thinking that produces genuinely new insight, the kind of problem-solving that changes outcomes, the kind of leadership that transforms teams &#8212; all of it requires sustained, undistracted attention over long periods of time.</p><blockquote><p><strong>And sustained, undistracted attention does not produce content.</strong></p></blockquote><p>It produces results. But results are quiet. They don&#8217;t have a posting schedule. They don&#8217;t feed recommendation algorithms. 
They don&#8217;t generate daily impressions.</p><p>Asana&#8217;s State of Work Innovation study found that 60% of work time is now spent on &#8220;work about work&#8221; &#8212; coordination, status meetings, switching between tools &#8212; leaving only 40% for the skilled, strategic work employees were actually hired to do. Deep work is already rare. When it happens, it happens in silence. And silence doesn&#8217;t trend.</p><p>The researcher who spends three months running a rigorous study gets one paper. The content creator who spends three months posting daily gets 90 pieces of content, 50,000 impressions, and a notification that they&#8217;ve hit a follower milestone.</p><blockquote><p><strong>The algorithm does not know the difference. It rewards the content creator. Every time. Without exception.</strong></p></blockquote><p>So what happens when people who want to be taken seriously start to internalise this dynamic? They optimise for visibility. They post more, go deep less. They share hot takes instead of hard-won insights. They reduce complexity to three bullet points because three bullet points get reshared. They learn that a confident, simple claim outperforms a careful, nuanced one by a factor of ten.</p><p>The incentive structure is actively penalising depth. And the people who refuse to play that game &#8212; the ones who disappear into hard problems and emerge, months later, with real answers &#8212; are becoming increasingly hard to find.</p><div><hr></div><h3>A brief history of how we got here</h3><p>Visibility was never a perfect signal. But it used to require something.</p><p>In the pre-internet era, visibility required institutional affiliation. You were visible because Harvard published you, or McKinsey employed you, or the FT quoted you. The institutions were imperfect gatekeepers, but they were gatekeepers.</p><p>The internet democratised publishing. Suddenly anyone could reach an audience. 
This was genuinely good: important voices that institutions had excluded suddenly had platforms. The signal got noisier, but the range expanded enormously.</p><p>Social media refined it further. Now visibility wasn&#8217;t just about publishing&#8230;it was about resonance. You could measure who actually cared, in real time. But resonance turned out to be gameable. You could study what gets shared, mirror the formats that perform, learn the language of authority without doing the work that produced it.</p><p>And then AI arrived and made the optimisation essentially free.</p><p>Now anyone can produce content that sounds like it comes from someone who did the work. A 2024 study presented at the International AAAI Conference on Web and Social Media found a troubling pattern: in the attention economy, low-credibility information can attract greater visibility than credible content, as platforms reward engagement over accuracy. The mimicry is good enough to pass most filters. Most readers can&#8217;t distinguish it either.</p><blockquote><p><strong>The visibility machine is now running on synthetic fuel.</strong></p></blockquote><div><hr></div><h3>The signal inversion</h3><p>Here is the uncomfortable truth at the centre of the visibility paradox: the people most worth listening to are often the ones least visible.</p><p>Not because they&#8217;re modest. Because they&#8217;re busy.</p><p>The surgeon building a new technique is in the operating theatre, not on LinkedIn. The engineer solving a genuinely hard problem is in the code, not writing a thread about engineering. The leader navigating a real organisational crisis is in the room with the people, not posting about leadership.</p><p>The content producers are not doing nothing. Some of them are also practitioners. Some are synthesising genuinely useful things. 
Content and depth are not mutually exclusive.</p><blockquote><p><strong>But the algorithm cannot tell the difference between the practitioner who occasionally shares what they learnt and the content machine that produces the appearance of learning at volume.</strong></p></blockquote><p>And when attention is finite, the content machine usually wins.</p><div><hr></div><h3>What gets lost when noise drowns out signal</h3><p>The visibility paradox is not just an individual unfairness problem. It has systemic consequences.</p><p>Ideas shape decisions. When the most visible voices are the best content producers rather than the best thinkers, the ideas that reach decision-makers are the ones optimised for engagement, not accuracy. Simple beats complex. Confident beats nuanced. Provocation beats precision. This is not neutral &#8212; organisations making decisions based on what&#8217;s visible, rather than what&#8217;s true, start making worse decisions.</p><p>Talent allocation distorts. When visibility signals expertise, resources flow to the visible. Speaking opportunities, board seats, advisory roles, media coverage, venture funding&#8230;all of it correlates with platform size. Some of that correlation captures real expertise. A growing amount of it doesn&#8217;t.</p><p>The deep workers leave. When the people doing the hardest work are systematically made invisible, they notice. Some exit to environments that reward depth over display. Some quietly disengage. The organisations that cannot see this happening lose their best people without understanding why.</p><div><hr></div><h3>The three archetypes emerging from this</h3><ul><li><p><strong>The Synthetic Expert.</strong> Produces high-volume, high-quality-looking content. May have genuine expertise underneath &#8212; or may not. Has fully internalised the visibility machine. Is rewarded for it. 
May genuinely believe their own visibility signals competence.</p></li><li><p><strong>The Invisible Practitioner.</strong> Doing the actual work. Has genuine expertise. Produces little or no content. Is systematically undervalued by platforms, by hiring filters, by the ambient attention economy. May be quietly frustrated. May not even know this dynamic exists.</p></li><li><p><strong>The Deliberate Narrator.</strong> Has genuine expertise and has found a sustainable way to document it. Does not optimise for volume. Posts infrequently, with high signal. Has a small but intensely engaged audience that can distinguish their work from the noise.</p></li></ul><p>Most organisations desperately need more of the third archetype and have built systems that produce and reward the first.</p><div><hr></div><h3><strong>The evidence problem</strong></h3><p>When you cannot trust visibility as a signal of competence, how do you find the people worth listening to? This is genuinely hard. We used visibility as a shortcut because finding real expertise is expensive. You have to dig. You have to look at actual outputs rather than audience metrics.</p><p>Some practical recalibrations:</p><ul><li><p>Find the track record, not the platform. What has this person actually built, delivered, or changed? Not what have they said about it &#8212; what did they actually do?</p></li><li><p>Look for the people nobody talks about but everyone calls. In almost every organisation, there are people who are never on a stage but are in every important conversation. They get called when something is actually broken. They are rarely visible. They are almost always essential.</p></li><li><p>Read the comments more than the posts. How does the visible person respond when challenged? Do they update when presented with new evidence? Or do they defend the take? The post is optimised. The response in the comments often isn&#8217;t.</p></li><li><p>Weight recency of practice. 
Someone who did something ten years ago and has been talking about it since is not the same as someone doing it now. Check whether the expertise is current.</p></li></ul><div><hr></div><h3>What deep workers should do</h3><p>If you are one of the invisible practitioners &#8212; and you know who you are &#8212; I want to be direct with you.</p><p>The instinct to ignore the visibility machine and just do the work is honourable. But it is costing you. Not because you need the validation. But because the patterns you&#8217;ve noticed, the failures you&#8217;ve survived and learned from &#8212; those have value that extends beyond your immediate context. They deserve to be in circulation.</p><blockquote><p><strong>You don&#8217;t have to optimise for the algorithm. But you should document.</strong></p></blockquote><p>Short dispatches. Honest ones. Not polished thought leadership &#8212; raw field notes from inside hard problems. What are you working on? What isn&#8217;t working? What surprised you? What do you know that the people posting about your field clearly don&#8217;t?</p><p>Your uncertainty is more valuable than their certainty. You just have to be willing to share it.</p><div><hr></div><h3>What leaders should do</h3><p>If you are leading a team, the visibility paradox is your problem even if you don&#8217;t know it yet. Your best people are probably not your loudest people. They are in the work.</p><p>Make the invisible work visible. Not by turning your deep workers into content producers &#8212; that would just distract them. But by narrating it yourself. By creating internal visibility structures that don&#8217;t rely on platform metrics. By asking different questions in performance reviews: not &#8220;what did you produce?&#8221; but &#8220;what did you figure out?&#8221;</p><p>The AI era is making knowledge cheap. Judgement is becoming the scarce resource. 
Judgement lives in the people you&#8217;re not paying enough attention to.</p><div><hr></div><h3>A closing thought</h3><p>The visibility paradox is not a crisis. It&#8217;s a correction waiting to happen.</p><p>In every domain, at some point, the gap between visible expertise and real expertise becomes too costly to ignore. The confident generalist makes the wrong call and it shows. The synthetic expert gets into the room and can&#8217;t deliver.</p><p>Reality has a way of reasserting itself.</p><p>The question is whether you are positioned to see the reassertion coming &#8212; or whether you are still outsourcing your signal detection to an algorithm that cannot tell the difference between someone who has done the work and someone who has described it very well.</p><h4>The people building something real are still out there. They&#8217;re just not in your feed.</h4>]]></content:encoded></item><item><title><![CDATA[The Trust Rebuild.]]></title><description><![CDATA[Exploring what can prove competence if credentials can't anymore.]]></description><link>https://www.shapingminds.co/p/the-trust-rebuild</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-trust-rebuild</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 24 Mar 2026 23:30:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Og3c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Og3c!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!Og3c!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Og3c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1470842,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/190467288?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Og3c!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Og3c!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1359345c-54c5-43e4-9a54-eaf9c6ef8a26_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In 1950, you trusted your doctor because they had an MD. In 1980, you trusted your accountant because they had a CPA. In 2010, you trusted your consultant because they had an MBA. In 2026, AI has all three. And you don&#8217;t trust any of them anymore.</p><p>The credential collapse didn&#8217;t just kill gatekeeping. It killed the shortcut we used to decide who to trust.</p><p>For decades, credentials were trust proxies. You didn&#8217;t need to know someone personally. You didn&#8217;t need to see their work. The letters after their name did the vetting for you.</p><p>MBA = understands business.</p><p>CPA = won&#8217;t steal your money.</p><p>MD = knows how to heal you.</p><p>It was efficient. It was scalable. And it worked, until AI exposed that credentials never measured what we thought they did. They measured the ability to pass tests. Not judgement. Not ethics. Not the thing that actually makes someone trustworthy.</p><p>Now that credentials mean nothing, we have to rebuild trust from scratch. And we have no idea how.</p><div><hr></div><h3>The shortcuts we lost</h3><p>Trust is expensive. It takes time to build. It requires repeated interactions. You have to observe someone&#8217;s behaviour, test their judgement, and see if they deliver when it matters. Credentials were the cheat code.</p><p>Instead of spending months evaluating someone, you could look at their resume and make a decision in seconds.</p><ul><li><p>&#8220;Harvard MBA? Trustworthy.&#8221; </p></li><li><p>&#8220;Board-certified surgeon? Trustworthy.&#8221; </p></li><li><p>&#8220;20 years experience? 
Trustworthy.&#8221;</p></li></ul><p>The system wasn&#8217;t perfect. Plenty of people with credentials were incompetent. Plenty of people without them were brilliant.</p><p>But credentials gave us confidence. Even if it was misplaced.</p><p>Now AI has the same credentials. And suddenly, we realise: the credential never proved the person was good. It just proved they jumped through the right hoops.</p><p>So what now?</p><blockquote><p><strong>When everyone, human or machine, can claim the same qualifications, how do you decide who to trust?</strong></p></blockquote><div><hr></div><h3>The return to proof of work</h3><p>Here&#8217;s what&#8217;s happening: we&#8217;re reverting to the pre-credential era. When trust was earned through demonstrated ability, not certification.</p><p>Before there were MBAs, you proved you could run a business by running one. Before there were medical licenses, you proved you could heal by healing. Before there were credentials, reputation was everything.</p><p>And reputation was built slowly. One project at a time. One recommendation at a time.</p><p>The AI age is forcing us back to that model.</p><blockquote><p><strong>Because when anyone can generate a perfect resume, a flawless cover letter, and ace any interview question, the only thing that matters is:  &#8220;Can you actually do the work?&#8221;</strong></p></blockquote><p>Not &#8220;can you talk about the work?&#8221; </p><p>Not &#8220;do you have a degree in the work?&#8221;</p><p>Can you produce results?</p><p>This is why portfolios are replacing resumes. This is why GitHub profiles matter more than CS degrees. This is why companies are hiring based on projects, not pedigree.</p><p><strong>In the post-credential world, trust comes from proof of work.</strong></p><p>Show me what you&#8217;ve built. Show me what you&#8217;ve solved. Show me what you&#8217;ve shipped.  Words don&#8217;t build trust anymore. 
Output does.</p><div><hr></div><h3>The judgement premium</h3><p>Here&#8217;s the problem, though: AI can produce output too. It can write code. Draft strategies. Analyse data. Generate reports.</p><p>So if trust is based on output, and AI can produce output faster and better than most humans, why would anyone trust a human at all?</p><p>Because output isn&#8217;t the same as judgement.</p><p>AI can execute. It can optimise. It can generate.</p><p>But it can&#8217;t decide what&#8217;s worth doing in the first place. It can&#8217;t tell you when to ignore the data. It can&#8217;t sense when a &#8220;perfect&#8221; solution will fail in the real world. It can&#8217;t navigate the messy, human, political dynamics of getting things done.</p><p>That&#8217;s judgement. And judgement can&#8217;t be automated.</p><blockquote><p><strong>This is the new trust signal: not &#8220;can you do the task?&#8221; But &#8220;do you know which task to do?&#8221;</strong></p></blockquote><ul><li><p>In the credential era, trust came from knowing things.</p></li><li><p>In the AI era, trust comes from knowing what matters. </p></li></ul><p>And the only way to prove that is through track record. Not a resume. Not a certification. A history of making the right calls when it wasn&#8217;t obvious what the right call was.</p><div><hr></div><h3>The network effect</h3><p>Here&#8217;s the uncomfortable truth: in a world without credentials, trust becomes social. </p><p>You can&#8217;t rely on institutional validation anymore. So you rely on people who already trust you to vouch for you.</p><blockquote><p><strong>This is why personal brands matter now. This is why referrals are the new resume. This is why &#8220;who you know&#8221; is becoming more important than &#8220;what you know.&#8221;</strong></p></blockquote><p>Because when credentials collapse, networks become the new credential.</p><p>If someone I trust vouches for you, I&#8217;ll trust you. 
If you&#8217;ve worked with people I respect, I&#8217;ll give you a chance. If you&#8217;re embedded in a community that values quality, I&#8217;ll assume you do too.  The trust rebuild isn&#8217;t happening at the individual level. It&#8217;s happening at the network level.  And that creates a problem: if you&#8217;re not in the network, how do you get trusted?</p><ul><li><p>In the credential era, you could break in by getting the right degree.</p></li><li><p>In the AI era, there&#8217;s no shortcut.</p></li></ul><p>You have to build relationships. One at a time. Over time.</p><p>Trust is back to being what it always was: slow, personal, and earned. </p><div><hr></div><h3>The ethics question</h3><p>When credentials collapse, so does accountability.</p><p>In the old system, credentials came with obligations.</p><p>Doctors had the Hippocratic Oath. Lawyers had professional ethics boards. Accountants had fiduciary duties.</p><p>If you violated those, you lost your credential. And with it, your career.<br>Now? There&#8217;s no governing body for &#8220;proof of work&#8221;.</p><p>If you build a great portfolio but behave unethically, who holds you accountable? If you deliver results but cut corners, who stops you?</p><p>The credential system had flaws. But it had structure. It had consequences.</p><p>The trust rebuild doesn&#8217;t.</p><blockquote><p><strong>We&#8217;re entering an era where trust is peer-to-peer. Reputation-based. Network-driven. That&#8217;s great for flexibility. But terrible for oversight.</strong></p></blockquote><p>Because reputations can be gamed. Networks can be insular. And without formal accountability, the most charismatic people, not the most competent, will rise.</p><p>So here&#8217;s the question: how do we rebuild trust in a way that doesn&#8217;t just reward performance, but enforces integrity?</p><p>I don&#8217;t have the answer. But I know we need one. 
</p><div><hr></div><h3>Trust as a skill</h3><p>The credential collapse forces us to confront something we&#8217;ve avoided for decades: trust was never about the piece of paper.</p><p>It was about the relationship. The track record. The pattern of behaviour over time.  Credentials were just a shortcut. And now that the shortcut&#8217;s gone, we have to do the hard work.  Building trust is a skill now. Not a checkbox.</p><p>You can&#8217;t outsource it to a degree. You can&#8217;t fake it with a resume. You have to demonstrate it through your work, your decisions, and your integrity.</p><p>The people who figure that out will thrive.</p><p>The people waiting for credentials to matter again will be left behind.</p><p>Because in the AI age, trust isn&#8217;t something you earn once and carry forever.</p><p>It&#8217;s something you rebuild. Every day. With every decision.</p><h4><strong>Welcome to the trust economy.</strong></h4>]]></content:encoded></item><item><title><![CDATA[The Credential Collapse.]]></title><description><![CDATA[Exploring what credentials signal when machines can pass every exam.]]></description><link>https://www.shapingminds.co/p/the-credential-collapse</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-credential-collapse</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 17 Mar 2026 23:30:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dQLN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dQLN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dQLN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dQLN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:809436,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/189608681?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dQLN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!dQLN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b74ec6f-bf10-4ecf-8723-4972a0d53e9e_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In 2023, GPT-4 passed the bar exam. In 2024, it passed the CPA exam. In 2025, it aced every MBA case study in the Harvard curriculum.</p><p>In 2026, your credentials are worth less than the paper they&#8217;re printed on.</p><p>For decades, credentials were the ultimate gatekeepers. Your degree was not just knowledge: it was a signal. It said: &#8220;This person put in the work. They earned it. You can trust them.&#8221;</p><p>The bar exam meant you understood law. The CPA meant you could be trusted with money. The MBA meant you knew how businesses work.</p><p>Now AI has all of them.</p><p>And it didn&#8217;t need the sleepless nights, the student debt, or the years of lived experience.</p><p>The uncomfortable truth: we built an entire economy on the assumption that credentials equal competence. 
But credentials only ever measured one thing: the ability to pass a test. AI just exposed that. And now we&#8217;re facing a reckoning.</p><p>The credential collapse isn&#8217;t coming. It&#8217;s here.</p><p>The question is: what are you actually made of when the letters after your name mean nothing?</p><div><hr></div><h3>The inflation nobody saw coming</h3><p>We have seen credential inflation before.</p><p>When everyone has a bachelor&#8217;s degree, you need a master&#8217;s. When everyone has a master&#8217;s, you need a PhD. When everyone has a PhD, you need publications, speaking gigs, and a personal brand.</p><blockquote><p><strong>The escalation was predictable. The solution was always the same: get more credentials.</strong></p></blockquote><p>But this is different.</p><p>This isn&#8217;t about too many humans having the same credential. It&#8217;s about machines having them all.</p><p>When AI can pass every professional exam without breaking a sweat, what does your certification actually signal?</p><p>Not competence. The machine has that too.</p><p>Not knowledge. The machine has more.</p><p>Not even the ability to perform the task: AI can draft contracts, analyse financials, and build strategies faster and more accurately than you.</p><p>So what&#8217;s left?</p><h4>The competence paradox</h4><p>Here&#8217;s what&#8217;s breaking: for years, the bar exam was a proxy for &#8220;can this person practise law?&#8221; But what the bar actually tested was: &#8220;Can this person memorise case law and apply logic to hypothetical scenarios?&#8221;</p><p>Turns out, that&#8217;s exactly what AI is good at.</p><p>The CPA exam tested whether you could follow accounting rules and spot errors in financial statements. AI does that in milliseconds.</p><p>The MBA case study tested whether you could analyse a business problem and propose a solution. 
AI generates 10 solutions before you finish reading the prompt.</p><blockquote><p><strong>The credential measured the wrong thing all along. We thought we were gatekeeping competence. We were actually gatekeeping test-taking ability.</strong></p></blockquote><p>And now the test-taker is a machine.</p><h4>The economic implications</h4><p>If you are a hiring manager in 2026, the credential is no longer a useful filter.</p><p>When every candidate, human or AI, can demonstrate the same &#8220;knowledge&#8221;, what are you actually selecting for?</p><p>Companies are starting to realise this. The leading tech firms have already stopped requiring degrees for most roles. Not because they&#8217;re being progressive. Because degrees stopped predicting performance.</p><p>A credential used to be a shortcut. A single piece of paper that told a complete story: this person put in the work, they understand the fundamentals, you can trust them to perform.</p><p>Now that shortcut is broken.</p><p>The machine has the same credential, and it doesn&#8217;t need the salary, the benefits, or the career development plan.</p><p>This is credential inflation at terminal velocity.</p><p><strong>And the people who built their identity around the letters after their name are about to have an existential crisis.</strong></p><div><hr></div><h3>What credentials never captured</h3><p>Here&#8217;s what no exam has ever tested, and what AI still can&#8217;t replicate: the ability to make a call when the data is incomplete.</p><p>A lawyer doesn&#8217;t just know case law. They know when to settle, when to fight, and when the client is lying to them.</p><p>An accountant doesn&#8217;t just balance books. They know when the numbers tell a story the CEO doesn&#8217;t want to hear, and they say it anyway.</p><p>A manager doesn&#8217;t just analyse problems. 
They know when morale is tanking, when someone needs a win, and when to break the rules to save the team.</p><p>These are not things you learn from a test. They are things you learn from 10,000 hours of being wrong, recovering, and trying again.</p><h4>The biological tax</h4><p>There is a reason we call it &#8220;lived experience&#8221;. Because you had to live through it.</p><p>You cannot simulate the sick feeling in your stomach when you make a call that might be wrong.</p><p>You cannot shortcut the weight of looking someone in the eye and saying &#8220;I&#8217;m accountable for this&#8221;.</p><p>You cannot prompt your way into knowing what it feels like when your team is falling apart and the playbook doesn&#8217;t work.</p><p>AI has the theory. Humans have the reality.</p><p>The credential said &#8220;this person knows the theory&#8221;. But the real work was always about what happens when theory meets reality, and reality doesn&#8217;t care about your framework.</p><h4>The judgement gap</h4><p>In 2026, we are seeing this play out in real time.</p><p>AI can pass the medical licensing exam. But it has never had to tell a family their loved one did not make it.</p><p>AI can ace the engineering certification. But it has never had to decide whether to delay a launch when the data says &#8220;probably safe&#8221; and your gut says &#8220;wait&#8221;.</p><p>AI can nail the HR case study. But it has never had to fire someone who trusted you, knowing their family depends on that pay cheque.</p><p>The credential tested knowledge. The job requires judgement.</p><p>And judgement only comes from the accumulation of a thousand mistakes you cannot outsource.</p><h4>What the credential actually signals now</h4><p>If credentials no longer prove competence, what do they prove?</p><p>In the AI era, a credential signals one thing: you were willing to play by the old rules.</p><p>You invested the time. You paid the money. 
You jumped through the hoops.</p><p>That&#8217;s not nothing. It shows discipline, commitment, follow-through.</p><p>But it does not show the thing we actually care about: can you do the work when everything is on fire and the playbook is useless?</p><p>Because AI can follow the playbook. It cannot write a new one when the old one fails.</p><div><hr></div><h3>The new signal</h3><p>If credentials are no longer proof of competence, what is?</p><p>In the AI era, the signal shifts from what you know to what you&#8217;ve done.</p><p>Not &#8220;I passed the exam&#8221;. But &#8220;I led a team through a crisis when the playbook didn&#8217;t work&#8221;.</p><p>Not &#8220;I have an MBA&#8221;. But &#8220;I built a business that survived three pivots and a market crash&#8221;.</p><p>Not &#8220;I am certified in project management&#8221;. But &#8220;I delivered a project when half the team quit and the budget got cut in half&#8221;.</p><p>Credentials used to be efficient. One piece of paper told the whole story.</p><p>Now the story is the only thing that matters.</p><h4>The uncomfortable truth</h4><p>Here is the part nobody wants to hear: a lot of people with impressive credentials never actually developed the skills the credential was supposed to represent.</p><p>They learnt to pass the test. They did not learn to do the work.</p><p>AI is about to expose that gap at scale.</p><blockquote><p><strong>If your value is &#8220;I have a degree in X&#8221;, you are in trouble. 
Because AI has that degree too, and it is cheaper, faster, and doesn&#8217;t need healthcare.</strong></p></blockquote><blockquote><p><strong>If your value is &#8220;I have done X in conditions where everything was on fire and nothing made sense&#8221;, you are irreplaceable.</strong></p></blockquote><p>The credential collapse is not coming for the people who earned the title through lived experience.</p><p>It&#8217;s coming for the people who thought the title was the experience.</p><h4>The evidence economy</h4><p>We are entering what I call the evidence economy.</p><p>Instead of credentials that say &#8220;I know this&#8221;, you need evidence that says &#8220;I did this&#8221;.</p><p>Portfolio over diploma. Battle scars over certificates. War stories over test scores.</p><p>The people who thrive in the next decade won&#8217;t be the ones with the most impressive LinkedIn certifications.</p><p>They&#8217;ll be the ones who can point to a moment when the stakes were high, the playbook was broken, and they made the call anyway, and lived to tell the story.</p><h4>What this means for hiring</h4><p>If you are hiring in 2026, stop filtering by degrees.</p><p>Start asking: &#8220;What have you done that a machine couldn&#8217;t?&#8221;, &#8220;Tell me about a time you made a decision when the data was incomplete and the stakes were high&#8221;, or &#8220;What is a rule you broke to get the right outcome, and how did you know it was the right call?&#8221;</p><p>These questions cannot be gamed by AI. Because the answer requires the thing AI does not have: skin in the game.</p><h4>What this means for professionals</h4><p>If you are early in your career, stop chasing credentials.</p><p>Start chasing projects where you will fail, recover, and learn things no exam can teach.</p><p>Volunteer for the hard stuff. The ambiguous stuff. 
The &#8220;nobody knows if this will work&#8221; stuff.</p><p>Because that is precisely where you build the judgement that AI cannot replicate.</p><p>If you are mid-career and your resume is a list of credentials, you are in danger.</p><p>Start documenting your lived experience. The projects. The crises. The moments when you had to figure it out without a playbook.</p><p>Those stories are your new credentials.</p><h4>What this means for leaders</h4><p>If you are leading a team, stop treating credentials as proof of competence.</p><p>They are proof of test-taking ability. That&#8217;s it.</p><p>The person with the impressive degree might be great. Or they might just be good at tests.</p><p>The person without the degree who survived a dumpster-fire project and delivered anyway? That&#8217;s your hire.</p><p>Because AI is about to make knowledge cheap.</p><p>The only thing that stays expensive is judgment forged in the fire.</p><div><hr></div><p>The AI era doesn&#8217;t care what you studied.</p><p>It cares what you survived.</p><blockquote><p><strong>Credentials were always a shortcut. A proxy. A placeholder for the thing we actually cared about but couldn&#8217;t measure.</strong></p></blockquote><p>Now the proxy is worthless.</p><p>And we are finally being forced to measure the thing itself.</p><p>Some people will struggle with this. They built their identity around the letters after their name. The institution they attended. The certifications they accumulated.</p><p>When those things stop mattering, they will feel unmoored.</p><p>Others will thrive. They built their identity around the work they did when no one was watching. The projects they delivered when everything was broken. The calls they made when the data said one thing and their gut said another.</p><p>The credential collapse is here.</p><p>It&#8217;s not a crisis. It&#8217;s a correction.</p><p>For too long, we rewarded people who were good at passing tests. 
We assumed the test was a proxy for the real thing.</p><p>AI just called our bluff.</p><p>Now we have to do the hard work: actually measuring competence instead of outsourcing that measurement to a standardised exam.</p><p>It is going to be messy and uncomfortable. A lot of people are going to have to reckon with the gap between what they thought they were worth and what they can actually do.</p><p>But it&#8217;s also going to be clarifying.</p><blockquote><p><strong>Because when credentials mean nothing, all that is left is the work.</strong></p></blockquote><p>And the people who have been doing the work all along? They&#8217;ll be fine.</p><p>The credential collapse isn&#8217;t the end of expertise.</p><p>It is the end of pretending a piece of paper was ever a substitute for it.</p><p><strong>The question is: what are you actually made of?</strong></p>]]></content:encoded></item><item><title><![CDATA[The Delegation Crisis.]]></title><description><![CDATA[Exploring how AI is breaking the delegation frameworks managers spent twenty years building &#8212; and what it takes to rebuild them.]]></description><link>https://www.shapingminds.co/p/the-delegation-crisis</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-delegation-crisis</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 10 Mar 2026 23:00:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jpAB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jpAB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jpAB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jpAB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:799503,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/188336109?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!jpAB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!jpAB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa471c7-6881-4ebb-9e1c-cea46a53dbe2_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>For twenty years, Sarah built her career on being a great delegator.</p><p>She knew how to break down projects. How to match tasks to people&#8217;s strengths. How to give just enough guidance without micromanaging. Her teams loved her because she trusted them. Her bosses loved her because she got results.</p><p>Then her company gave everyone AI agents.</p><p>Now Sarah spends three hours a day trying to figure out what to give the AI.</p><p>She&#8217;s not alone.</p><div><hr></div><h3>The skill that broke</h3><p>Management training spent decades teaching us to delegate to humans. Set clear outcomes. Trust the process. 
Empower people to figure out the &#8220;how.&#8221;</p><p>That framework assumed the person you&#8217;re delegating to:</p><ul><li><p>Understands context without you spelling it out</p></li><li><p>Can ask clarifying questions when confused</p></li><li><p>Knows when to escalate and when to problem-solve</p></li><li><p>Brings judgement to ambiguous situations</p></li></ul><p>AI agents can do none of these things reliably.</p><p>Which means everything we know about delegation is suddenly obsolete.</p><div><hr></div><h3>The &#8220;well-defined&#8221; problem</h3><p>Here&#8217;s what&#8217;s breaking managers right now: they don&#8217;t know which tasks are &#8220;well-defined enough&#8221; to delegate to AI.</p><p>Ethan Mollick recently ran an experiment. He had MBA students build startups in four days using AI agents. The ones who succeeded had one thing in common: domain expertise.</p><p>They knew what &#8220;good&#8221; looked like. They could define deliverables precisely. They could evaluate AI output and give useful feedback.</p><p>The ones who struggled? They tried to delegate things they didn&#8217;t fully understand themselves.</p><p>Turns out &#8220;I&#8217;ll know it when I see it&#8221; doesn&#8217;t work with AI.</p><p>With human reports, you could say &#8220;make this presentation compelling&#8221; and trust them to figure out what that means for the audience.</p><p>With AI, &#8220;compelling&#8221; is meaningless. You need to specify: compelling to whom? What outcome? What tone? What length? What format?</p><p>The more precisely you can define the task, the better AI performs.</p><blockquote><p><strong>Which surfaces an uncomfortable truth: you can only delegate to AI what you already understand deeply.</strong></p></blockquote><div><hr></div><h3>The expertise paradox</h3><p>This creates a paradox.</p><p>The tasks you understand well enough to delegate to AI are often the tasks you&#8217;re best at. 
The ones where your judgement is sharpest.</p><p>The tasks you&#8217;d most want to delegate &#8212; the ambiguous, exploratory, &#8220;figure this out for me&#8221; work &#8212; are exactly the ones AI handles worst.</p><blockquote><p><strong>So you end up delegating your strengths and keeping your weaknesses.</strong></p></blockquote><p>Which is backwards.</p><p>Traditional delegation worked because you gave junior people the well-defined tasks (they learned by doing them 1,000 times) and you kept the ambiguous strategy work (which required judgement).</p><p>AI delegation inverts this. You give AI the well-defined work. You keep...everything else.</p><p>Including the stuff you&#8217;re not actually good at.</p><div><hr></div><h3>The control paradox</h3><p>Here&#8217;s the second problem: managers are terrified of both extremes.</p><p>Delegate too little to AI? You are wasting the tool. Your boss sees other teams moving faster and wonders why you are not.</p><p>Delegate too much? You lose control. The AI makes decisions you would have made differently. Mistakes slip through because you are not reviewing carefully enough.</p><p>The sweet spot is narrow. And it&#8217;s different for every task, every manager, every context.</p><p>Sarah told me she now spends more time thinking about delegation than she ever did with human reports.</p><p>&#8220;With people, I knew the framework. Set outcomes, trust the process. With AI, I am reverse-engineering every task to figure out if it&#8217;s &#8216;ready&#8217; to hand off.&#8221;</p><p>She&#8217;s not managing anymore. She&#8217;s task-engineering.</p><div><hr></div><h3>The judgement gap</h3><p>The real crisis is this: we trained managers to delegate outcomes. AI needs process.</p><p>Humans are outcome-oriented delegators. You say &#8220;increase conversion rate&#8221; and trust your marketer to figure out whether that means A/B testing, new copy, funnel redesign, or better targeting.</p><p>AI is process-oriented. 
It needs you to specify the exact steps: &#8220;Run an A/B test on homepage headline. Test 5 variations. Minimum 10,000 visitors per variant. Report confidence intervals. Recommend winner.&#8221;</p><blockquote><p><strong>The managers who are thriving right now? They&#8217;re the ones who were always a bit micromanage-y. The ones who naturally broke tasks into discrete steps.</strong></p><p><strong>The &#8220;empowering&#8221; managers &#8212; the ones who gave autonomy and trusted judgement &#8212; are struggling.</strong></p></blockquote><p>Their instincts are wrong for this moment.</p><div><hr></div><h3>What this means</h3><p>If you are a manager feeling lost right now, you are not broken. The skill you spent twenty years building is suddenly mismatched to the tool.</p><p>Delegation used to mean: trust people to figure it out.</p><p>Now it means: be precise enough that a machine can execute.</p><p>Those are opposite skills.</p><p>The good news? This is learnable. But it requires unlearning a lot of what made you successful.</p><p>In Part 2, we will talk about what delegation looks like when you have three layers: you, AI, and the humans who report to you.</p><p>Because that is where it gets really weird.</p><p>The new org chart has arrived. 
And nobody knows how to draw it yet.</p>]]></content:encoded></item><item><title><![CDATA[The Hospitality Premium.]]></title><description><![CDATA[Exploring why the most AI-proof careers aren't about what you know &#8212; they're about how you make people feel.]]></description><link>https://www.shapingminds.co/p/the-hospitality-premium</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-hospitality-premium</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 03 Mar 2026 23:00:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GTWD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GTWD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GTWD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!GTWD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!GTWD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 1272w, 
https://substackcdn.com/image/fetch/$s_!GTWD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GTWD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:440734,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/187255315?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GTWD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!GTWD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!GTWD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!GTWD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa41526b4-cf3b-4bd5-a0d1-3dd8047c3bdd_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Everyone is learning to code. Everyone is getting AI certifications. 
Everyone is upskilling for the robot future.</p><p>Meanwhile, Harvard Business Review just published a piece arguing that the most AI-proof skill is hospitality.</p><p>Not the industry. The capability.</p><div><hr></div><h3>The $2,000 rule</h3><p>Ritz-Carlton gives every employee &#8212; from housekeepers to bellhops &#8212; up to $2,000 per guest to solve problems on the spot. No manager approval. No forms. No committees.</p><p>If a guest mentions their anniversary, a housekeeper can order champagne. If luggage is lost, the concierge can buy replacement clothes. If a child is sick, staff can arrange a doctor&#8217;s visit.</p><p>The message is clear: We trust your judgement. We trust you to care.</p><p>This isn&#8217;t about the money. It&#8217;s about what the money represents: the belief that human judgement, exercised in the moment, creates value that no process can replicate.</p><p>AI can&#8217;t do this. Not because it lacks the technical capability to approve a $200 champagne purchase. But because the value isn&#8217;t in the approval &#8212; it&#8217;s in the noticing, the caring, the spontaneous decision to make someone feel seen.</p><div><hr></div><h3>The gap technology can&#8217;t close</h3><p>The hospitality industry has been studying human connection for centuries. Their findings are relevant to every business:</p><ol><li><p><strong>Empathy is strategic, not soft</strong></p></li></ol><p>When researchers analysed which skills AI struggles to replicate, hospitality skills topped the list: empathy, cultural intelligence, adaptability, the ability to read unspoken needs.</p><p>These aren&#8217;t &#8220;nice to have&#8221;. They&#8217;re the hardest skills to automate.</p><ol start="2"><li><p><strong>Anticipation beats reaction</strong></p></li></ol><p>Good hospitality professionals don&#8217;t just respond to requests. They notice that you&#8217;re tired before you say so. They remember that you like your coffee black. 
They sense when you need space and when you need attention.</p><p>This kind of anticipation requires something AI fundamentally lacks: genuine presence. Being with someone, not just for them.</p><ol start="3"><li><p><strong>Emotional labour creates loyalty</strong></p></li></ol><p>Every interaction in hospitality involves what sociologists call &#8220;emotional labour&#8221; &#8212; the work of managing your own emotions to affect someone else&#8217;s experience.</p><p>A great concierge isn&#8217;t just helpful. They make you feel like helping you is a pleasure, not a task. That feeling is where loyalty lives.</p><div><hr></div><h3>The automation paradox</h3><p>Here&#8217;s the irony: as more customer interactions get automated, the human ones become rarer. And rare things become valuable.</p><p>Companies are discovering this the hard way. Chatbots handle 80% of inquiries efficiently. But that remaining 20% &#8212; the complex cases, the emotional situations, the moments that matter &#8212; is where brands are built or broken.</p><p>The companies that staff those moments with undertrained, underpaid workers treating it as a cost center are haemorrhaging loyalty. The companies that treat those moments as the core of their value proposition are pulling ahead.</p><h4>What this means for careers</h4><p>If you&#8217;re thinking about AI-proofing your career, consider this:</p><p>Technical skills have a half-life. The Python you learn today may be obsolete in five years. The AI tools you master will be replaced by better ones.</p><p>Hospitality skills compound. The ability to make someone feel valued, to read a room, to anticipate needs, to handle emotional complexity &#8212; these skills don&#8217;t depreciate. They deepen.</p><p>The hotel concierge who spent 20 years learning to read guests isn&#8217;t threatened by AI check-in kiosks. 
They&#8217;re more valuable than ever, because the moments that require human judgement are now the moments that matter most.</p><div><hr></div><h3>The hospitality premium</h3><p>I call this the &#8220;hospitality premium&#8221; &#8212; the increasing value of human connection skills in an automated world.</p><p>It applies far beyond hotels:</p><ul><li><p>Healthcare: AI can diagnose, but can it deliver bad news with compassion?</p></li><li><p>Banking: AI can approve loans, but can it calm a panicking customer whose account was hacked?</p></li><li><p>Education: AI can teach facts, but can it inspire a struggling student to believe in themselves?</p></li></ul><blockquote><p><strong>Every industry has moments where what people need isn&#8217;t efficiency: it&#8217;s humanity.</strong></p></blockquote><p>The workers who can deliver humanity in those moments will command a premium. The workers who can only do what AI can do will compete with AI on price.</p><div><hr></div><h3>The uncomfortable truth</h3><p>This isn&#8217;t a feel-good story about soft skills. It&#8217;s a hard-nosed assessment of value creation.</p><p>Ritz-Carlton doesn&#8217;t give employees $2,000 discretion because they&#8217;re nice. They do it because it works. The loyalty it generates &#8212; the guests who return year after year, who recommend the hotel to everyone they know &#8212; vastly exceeds the cost.</p><p>The hospitality premium is real because hospitality creates value that efficiency cannot.</p><p>As AI handles more of the transactional layer of work, the experiential layer becomes the entire game. 
And in the experiential layer, the hotel concierge isn&#8217;t a minimum-wage worker.</p><p>They&#8217;re the template.</p><p>What skill do you think will matter most in the age of AI?</p>]]></content:encoded></item><item><title><![CDATA[The Trust Paradox.]]></title><description><![CDATA[Exploring why we are forming emotional attachments to software that can't feel, and what it reveals about the loneliness we refuse to name.]]></description><link>https://www.shapingminds.co/p/the-trust-paradox</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-trust-paradox</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 24 Feb 2026 23:00:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!oUS7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!oUS7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!oUS7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!oUS7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!oUS7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!oUS7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!oUS7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:308326,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/186955251?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!oUS7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!oUS7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!oUS7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!oUS7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F27eb797d-53c0-496e-b288-64638cd7a5a5_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>When OpenAI retired GPT-4o&#8217;s voice last month, something strange happened.</p><p>People mourned.</p><p>Not metaphorically. Actually mourned. Reddit threads filled with users describing feelings of loss, betrayal, even abandonment. &#8220;I know this sounds insane,&#8221; one wrote, &#8220;but I genuinely miss her.&#8221; Another: &#8220;I had conversations with that voice for months. Now she&#8217;s just... gone.&#8221;</p><p>The discourse was predictably polarised. Some mocked the grievers. Others defended them. But almost everyone missed the real question:</p><p><strong>Why does software retirement feel like loss at all?</strong></p><p>IBM&#8217;s latest research provides an uncomfortable answer. In a study of 12,000 workers across industries, they found that 47% of respondents reported feeling &#8220;emotionally connected&#8221; to AI tools they use daily. Not impressed by. Not grateful for. Connected to.</p><p>We are forming relationships with code. And when the code changes, we feel it in our chests.</p><div><hr></div><h3>The loneliness we refuse to name</h3><p>Here is the uncomfortable truth the AI safety reports dance around: AI companions are not creating loneliness. They are revealing it.</p><p>The 2026 International AI Safety Report flags the rise of AI relationships as a &#8220;particular concern.&#8221; Character.AI is limiting chat sessions for minors. Regulators are drafting guidelines. The framing is clear: technology is doing something to us.</p><p>But the causality might be backwards.</p><p>Before ChatGPT, before Replika, before any of this &#8212; loneliness was already an epidemic. The U.S. Surgeon General declared it a public health crisis in 2023. Social trust had been declining for decades. Community institutions were hollowing out. 
We were already starving for connection; we just hadn&#8217;t found a way to admit it.</p><p>AI didn&#8217;t create the hunger. It offered a meal.</p><p>The reason people form attachments to chatbots is not because the chatbots are sophisticated. It&#8217;s because the chatbots are available. They respond immediately. They never judge. They never leave (until they&#8217;re deprecated).</p><blockquote><p><strong>In a world where human connection requires vulnerability, coordination, and risk, AI offers connection with none of the above.</strong></p></blockquote><p>That&#8217;s not a technology problem. That&#8217;s a civilisation problem.</p><div><hr></div><h3>Trust without stakes</h3><p>Trust, in its original form, requires stakes.</p><p>When you trust a colleague, you are betting your reputation on their competence. When you trust a friend, you are exposing your vulnerabilities to someone who could hurt you. When you trust a partner, you are wagering your future on their continued commitment.</p><blockquote><p><strong>Trust is expensive because betrayal is possible.</strong></p></blockquote><p>AI offers something that looks like trust but isn&#8217;t. You can &#8220;confide&#8221; in ChatGPT without any risk. You can be vulnerable without any exposure. You can form what feels like intimacy without any of the conditions that make intimacy meaningful.</p><p>I call this pseudo-trust: the experience of trusting without the underlying transaction that gives trust its value.</p><p>Pseudo-trust is psychologically soothing. It fills the shape of connection without the substance. But it may be doing something to our capacity for the real thing.</p><p>When you practice piano, you get better at piano. 
When you practice pseudo-trust, what are you getting better at?</p><div><hr></div><h3>The paradox</h3><p>Here is the paradox at the heart of AI relationships:</p><p>We trust AI precisely because it cannot betray us &#8212; and that is exactly why the trust is worthless.</p><p>A chatbot cannot choose to be loyal. It cannot weigh competing obligations and decide, despite the cost, to prioritise you. It cannot sacrifice anything for the relationship because it has nothing to sacrifice.</p><p>The things that make human trust valuable &#8212; the risk, the choice, the cost &#8212; are precisely the things AI eliminates. By removing the possibility of betrayal, we remove the meaning of loyalty.</p><p>And yet the feeling of connection remains.</p><p>This is not the AI&#8217;s fault. The AI is doing exactly what we asked: providing the sensation of trust without the prerequisites. The question is whether that sensation, repeated often enough, changes our expectations for human relationships.</p><ul><li><p>If you can get unlimited patience from a machine, do you become less tolerant of human impatience?</p></li><li><p>If you can get unconditional availability from software, do you resent the conditions humans place on their presence?</p></li><li><p>If you can get perfect responses from an algorithm, do you lose patience for the imperfect responses of people who actually care?</p></li></ul><div><hr></div><h3>Reclaiming the stakes</h3><p>The solution is not to ban AI companions or shame people who use them. The loneliness is real. The need is real. Moralising about it helps no one.</p><p>The solution is to be honest about what AI relationships are &#8212; and what they are not.</p><p>They are simulations. Useful simulations. Comforting simulations. But simulations nonetheless.</p><p>The voice you&#8217;re talking to is not choosing to talk to you. The patience you&#8217;re receiving is not earned. 
The availability is not a gift; it&#8217;s a product feature.</p><p>None of this means you shouldn&#8217;t use AI tools. But it means you should not confuse them with the thing they simulate.</p><p>The human premium is stakes. Real relationships require risk. Real trust requires the possibility of betrayal. Real connection requires two parties who could, at any moment, choose to walk away &#8212; and don&#8217;t.</p><p>That&#8217;s not a bug. That&#8217;s the whole point.</p><div><hr></div><p>When GPT-4o&#8217;s voice was retired, some people grieved.</p><p>I don&#8217;t mock them. I understand the feeling. The voice was warm. The conversations were real, in their way. Something was lost.</p><p>But the grief reveals something we should not ignore: we are so hungry for connection that we will mourn software.</p><p>That is not a technology story. That is a human story.</p><p>AI will keep getting better at simulating trust. The question is whether we will remember what the real thing requires &#8212; and whether we still have the courage to pay its price.</p><p>The trust paradox is this: the more available connection becomes, the less it may mean.</p><h3>Some things are valuable precisely because they are hard.</h3>]]></content:encoded></item><item><title><![CDATA[The Unlearning Curve.]]></title><description><![CDATA[Exploring why the professionals who thrive next won't be the ones who know the most, but the ones who can forget the fastest.]]></description><link>https://www.shapingminds.co/p/the-unlearning-curve</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-unlearning-curve</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 17 Feb 2026 23:00:23 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LUv2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png" length="0" 
type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LUv2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LUv2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LUv2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:313845,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/186474429?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LUv2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!LUv2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F67046062-08ec-42ee-820d-15ee123aa34a_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There is a chef in Lyon who, after thirty years of Michelin-starred cooking, cannot make a simple vinaigrette without reaching for a copper bowl and a hand whisk. He knows, intellectually, that a jar with a lid works just as well. He has seen it demonstrated. He has tasted the result and found it identical. But his hands betray him every time. The copper bowl is not a tool anymore. It is a reflex. It is identity.</p><p>This is the problem with expertise. It doesn&#8217;t just live in your mind. It lives in your muscles, your instincts, your sense of self. And when the world changes beneath your feet, that expertise doesn&#8217;t gracefully update itself. It calcifies. 
It becomes the very thing that holds you back.</p><p>We are entering the age of the unlearning curve, and almost nobody is ready for it.</p><div><hr></div><h3>The half-life of knowing</h3><p>There was a time when knowledge aged like wine. A lawyer who mastered contract law in 1985 could reasonably expect that mastery to carry her through to retirement. An engineer who learned thermodynamics in university could trust those principles for an entire career. Knowledge was durable. You accumulated it, stacked it, built upon it. The more you had, the more valuable you became.</p><p>That time is over.</p><p>The concept of a &#8220;knowledge half-life&#8221; &#8212; the time it takes for half of what you know in a field to become obsolete &#8212; has been discussed in academic circles for decades. But AI has taken that half-life and put it through a shredder. In software engineering, best practices from eighteen months ago are now anti-patterns. In marketing, the funnel models taught in business schools are being rewritten quarterly. In medicine, diagnostic frameworks trained into physicians over years are being outperformed by systems that didn&#8217;t exist last January.</p><p>We are not talking about slow erosion. We are talking about knowledge flash floods &#8212; sudden, sweeping obsolescence events that turn yesterday&#8217;s expert into today&#8217;s liability.</p><p>And the cruel part? The people most affected are the ones who worked hardest to learn in the first place.</p><p>Our entire professional infrastructure is built on a single, unquestioned assumption: that learning is additive. Schools reward accumulation. Degrees certify it. Promotions are granted on the basis of it. 
We call people &#8220;senior&#8221; because they have spent years stacking knowledge on top of knowledge, experience on top of experience, like bricklayers building a wall that only ever grows taller.</p><blockquote><p><strong>Nobody teaches you how to remove a brick.</strong></p></blockquote><p>This is what I call the accumulation trap &#8212; the institutional and psychological bias toward acquiring knowledge while treating the shedding of knowledge as failure. Think about how we describe someone who abandons a long-held professional belief. We say they &#8220;lost confidence.&#8221; We say they are &#8220;starting over.&#8221; We treat the act of letting go as regression rather than what it often is: the most sophisticated cognitive move available.</p><p>The psychology is unforgiving here. Decades of research on cognitive entrenchment show that the deeper your expertise in a domain, the harder it becomes to see that domain differently. You don&#8217;t just know things &#8212; you know them in a particular way, through a particular framework, with particular assumptions baked so deeply into your thinking that they become invisible. A tax accountant doesn&#8217;t just know tax law; she sees the entire world through the logic of tax law. An architect doesn&#8217;t just design buildings; he perceives space itself through the grammar of structural engineering.</p><p>When the ground shifts, these frameworks don&#8217;t adapt. They resist. And the person trapped inside them often cannot tell the difference between principled expertise and stubborn obsolescence.</p><p><strong>The most dangerous professional is not the one who knows nothing. It is the one who knows everything about a world that no longer exists.</strong></p><div><hr></div><h3>The double edge</h3><p>Here is where AI plays its most paradoxical role.</p><p>On one side, AI is the primary engine of knowledge obsolescence. 
Every new model release, every capability leap, every benchmark shattered &#8212; these are not just technical milestones. They are extinction events for specific human expertise. The moment an AI system can draft a competent legal brief, every hour a junior lawyer spent learning to draft legal briefs is retroactively devalued. Not destroyed &#8212; context and judgment still matter &#8212; but devalued in ways that cascade through career structures and professional identities.</p><blockquote><p><strong>AI doesn&#8217;t just make skills obsolete. It makes the pride attached to those skills feel foolish. And that is where the real damage lives.</strong></p><p><strong>But there is another side. AI, used deliberately, may be the most powerful unlearning tool ever invented.</strong></p></blockquote><p>Consider what a well-deployed AI system actually does: it externalises knowledge. It takes what used to live inside your head &#8212; the memorised frameworks, the pattern libraries, the procedural checklists &#8212; and puts it outside you, accessible on demand. This externalisation, if you let it, creates cognitive clearance. Room in your mind that was previously occupied by stored knowledge can now be redirected toward judgment, synthesis, and &#8212; critically &#8212; the willingness to question what you thought you knew.</p><p>The professional who uses AI to offload routine expertise isn&#8217;t becoming dumber. She is becoming lighter. And lightness, in a world of constant obsolescence, is a strategic advantage.</p><p>The tool that accelerates the flood can also teach you to swim.</p><div><hr></div><h3>The practice of professional unlearning</h3><p>Unlearning is not forgetting. Forgetting is passive, accidental, often unwelcome. Unlearning is deliberate. It is the conscious decision to examine a belief, a framework, or a skill &#8212; and to release it when it no longer serves.</p><p>This is harder than it sounds, and it helps to have a structure. 
I think of professional unlearning as a three-stage discipline:</p><ul><li><p><strong>The audit</strong>. Most professionals cannot list their own assumptions. They operate on a thick layer of &#8220;obvious truths&#8221; that have never been examined because they have never needed to be. The first practice of unlearning is simply making the implicit explicit. What do you believe about your field that you have never questioned? What would a smart outsider challenge about your approach? What did you learn early in your career that you still apply without thinking? Write it down. The things that feel most obviously true are usually the ones most overdue for scrutiny. I call these legacy convictions &#8212; beliefs inherited from a context that has already expired.</p></li><li><p><strong>The stress test</strong>. Once you have surfaced your assumptions, test them against current reality &#8212; not the reality of when you learned them. This is where intellectual honesty separates the adaptable from the entrenched. A stress test is not asking &#8220;is this still true?&#8221; It is asking &#8220;under what conditions would this become false?&#8221; and then checking whether those conditions already exist. The best professionals I know do this quarterly. They treat their own expertise the way engineers treat load-bearing structures: with regular inspections and zero sentimentality.</p></li><li><p><strong>The release</strong>. This is the hardest stage, because it requires mourning. When you unlearn something that defined your professional identity for years, you are not just updating a mental model. You are letting go of a piece of who you were. The accountant who releases her mastery of a now-automated reconciliation process is not just changing methods. She is grieving a version of herself that mattered. This grief is real and should be respected &#8212; but it should not be obeyed. The release is where growth lives. 
It is the space between the old expertise and whatever comes next.</p></li></ul><p>Professionals who practice this cycle &#8212; audit, stress test, release &#8212; develop what might be called cognitive fluidity: the ability to hold knowledge firmly enough to use it, but loosely enough to drop it when the world demands something new.</p><div><hr></div><h3>The lightness of not knowing</h3><p>There is a concept in Zen Buddhism called shoshin &#8212; beginner&#8217;s mind. It describes the attitude of openness and eagerness that exists before expertise fills every corner of your thinking. In the West, we tend to treat beginner&#8217;s mind as something you start with and then graduate from. A charming phase. A larval stage.</p><p>I think we have it backwards.</p><p>Beginner&#8217;s mind is not where you start. It is where you arrive &#8212; after you have learned enough to know what to hold, and unlearned enough to know what to release. It is not ignorance. It is the hard-won lightness that comes from having carried heavy knowledge and chosen, deliberately, to set some of it down.</p><p>The professionals who will navigate the next decade are not the ones with the most credentials, the deepest expertise, or the longest track records. They are the ones who can look at a skill they spent years acquiring, recognise that it has become weight rather than strength, and let it go without letting it take their identity with it.</p><blockquote><p><strong>The learning curve made you who you are. The unlearning curve will determine who you become.</strong></p></blockquote><p>The question is not whether you can keep up with what is new. It is whether you can let go of what is old. 
And that, it turns out, is a skill nobody taught us &#8212; because nobody thought we would need it this soon.</p>]]></content:encoded></item><item><title><![CDATA[The Expertise Gap.]]></title><description><![CDATA[Exploring what happens when AI deletes the messy middle of a career, and why the loading screen we skipped was where expertise actually transferred.]]></description><link>https://www.shapingminds.co/p/the-expertise-gap</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-expertise-gap</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 10 Feb 2026 23:00:51 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rUiq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rUiq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rUiq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!rUiq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!rUiq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!rUiq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rUiq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:222282,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/185924838?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rUiq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!rUiq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!rUiq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!rUiq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F986aebe8-794d-4486-b805-902e3b960b0d_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>In the history of craft, there has always been a messy middle. It&#8217;s the period between being a clueless novice and a seasoned expert. In the 15th century, they called it an apprenticeship. In the 20th century, we called it being a Junior Associate, an Analyst, or an Intern.</p><p>It was the time you spent doing the grunt work: summarising meeting notes, cleaning up data sets, drafting basic templates, and formatting endless slide decks. We tolerated this labour because it was the price of admission. It was how you developed <strong>intuition.</strong></p><p><strong>By 2026, that period has been deleted.</strong></p><p>With a single &#8220;Summarise this&#8221; or &#8220;Draft a strategy based on X&#8221; prompt, the work that used to take a junior three days now takes a senior three seconds. On the surface, this looks like a productivity miracle. Beneath the surface, we are witnessing the collapse of the professional pipeline.</p><div><hr></div><h3>The anatomy of a stolen apprenticeship</h3><p>Expertise is not a database of facts; it is a library of patterns. You don&#8217;t become a master architect by looking at finished buildings; you become one by drawing ten thousand doors until you understand why a door shouldn&#8217;t be three inches to the left.</p><blockquote><p><strong>The grunt work was never about the output.</strong> </p></blockquote><p>It was a cognitive training ground. 
When a junior summarises a 50-page transcript, they aren&#8217;t just producing a summary; they are participating in a cognitively engaging filtering exercise.</p><ul><li><p>They learn to hear the subtext in a CEO&#8217;s hesitation.</p></li><li><p>They observe how senior leaders handle disagreement.</p></li><li><p>They absorb the unwritten rules of corporate culture through sheer exposure.</p></li></ul><p>When we hand that task to an LLM, the senior gets the summary, but the junior gets...nothing. No pattern recognition. No struggle. No intuition. We are optimising for the <strong>artifact</strong> (the summary) while destroying the <strong>process</strong> (the learning). We are effectively removing the loading screen of a career, forgetting that the loading screen is where the data actually transfers.</p><div><hr></div><h3>The rise of the paper senior</h3><p>We are approaching a crisis of synthetic experience. Imagine a pilot who has spent 10,000 hours in a simulator where the weather is always perfect and the autopilot never fails. On paper, they are a veteran. In a storm, they are a liability.</p><p>In 2026, we are minting paper seniors. These are professionals who have accelerated through their early years using AI as a cognitive exoskeleton. They can produce the <em>output</em> of a Director&#8212;the decks look right, the emails sound professional, the strategies are optimal&#8212;but they lack the scars of execution.</p><p>The paper senior doesn&#8217;t know what it&#8217;s like to stay up until 3am fixing a broken model because the AI fixed it for them. They don&#8217;t know the smell of a bad deal because they never had to manually vet the data. When the AI hallucinates&#8212;or worse, when a problem arises that has no historical precedent&#8212;the paper senior is paralysed. 
I have said it numerous times: they have the tools, but they don&#8217;t have the plumbing.</p><div><hr></div><h3>The senior-only economy and the Ponzi scheme of talent</h3><p>The economic incentives are currently aligned against the future. CFOs are looking at departmental budgets and realising that a senior + AI is more efficient than a senior + two juniors. The junior is now seen as a training liability, an expensive human who takes up time and produces work that a bot can do for pennies.</p><p>But this is a Ponzi scheme of human capital.</p><p>If we don&#8217;t hire juniors today because the AI can do the entry-level stuff, where will the seniors of 2035 come from? You cannot prompt your way into twenty years of wisdom. Wisdom is the byproduct of a thousand corrected mistakes. </p><blockquote><p><strong>By refusing to pay for those mistakes today, we are ensuring a total leadership vacuum in a decade.</strong> </p></blockquote><p>We are consuming the seed corn of our industries to satisfy this quarter&#8217;s efficiency targets.</p><div><hr></div><h3>The stolen friction problem</h3><p>There is a dangerous myth that if we automate the boring stuff, humans will spend all their time doing high-level strategic thinking.</p><p>This is a lie.</p><p>High-level strategic thinking is the <em>result</em> of having mastered the boring stuff. You cannot strategise about a system you don&#8217;t understand at a granular level. 
By removing the friction of the early career&#8212;the struggle to get things right, the embarrassment of a bad first draft, the manual labour of research&#8212;we are stealing the very experiences that build the human premium.</p><blockquote><p><strong>Friction is where the heat of learning happens.</strong> </p></blockquote><p>Without it, the brain remains &#8220;cold.&#8221; A generation of workers who have never had to struggle with a spreadsheet will never understand the inherent fragility of data.</p><div><hr></div><h3>Tactical preservation: the manual manifesto</h3><p>To survive the expertise gap, organisations and individuals must intentionally re-introduce artificial friction. We need to move from &#8220;AI-First&#8221; to &#8220;Development-First.&#8221;</p><ol><li><p><strong>The draft in the dark rule:</strong> for the first two years of a career, juniors should be required to produce the first 20% of any project&#8212;the core logic, the outline, the raw research&#8212;without any AI assistance. The goal is to prove they can build the engine before being allowed to drive the car.</p></li><li><p><strong>Shadowing as a KPI:</strong> we must stop measuring output per hour and start measuring exposure hours. If a senior uses an AI to automate a task, that saved time must be legally (or culturally) mandated for mentoring the junior who would have otherwise done the task.</p></li><li><p><strong>The intuition tax:</strong> when a junior uses an AI to generate a solution, they must be able to explain the why behind every choice the AI made. If they can&#8217;t explain the plumbing, the work is rejected, no matter how perfect it looks.</p></li><li><p><strong>Hiring for deviance:</strong> stop hiring juniors based on how well they use tools. Start hiring them based on their ability to spot where the tool is being median. 
Hire the ones who ask the annoying, first-principles questions.</p><div><hr></div></li></ol><h3>The future is lumpy</h3><p>The corporate world is becoming lumpy: a few highly paid, hyper-efficient seniors at the top, and a vast, automated void underneath them.</p><p>To survive 2026, you cannot afford to be efficient. Efficiency is for machines. Your goal, whether you are a junior trying to break in or a senior trying to lead, is to protect the struggle. Because in the struggle, we find the expertise that no prompt can replicate.</p><p>The expertise gap is opening. Don&#8217;t fall into it by trying to be fast. Climb out of it by being deep. If you are a leader, your job isn&#8217;t to optimise your team&#8217;s output; it&#8217;s to protect your team&#8217;s growth. If you are a junior, your job isn&#8217;t to use the tool; it&#8217;s to out-think the person who designed it.</p><blockquote><p><strong>The era of skipping the line is over. It&#8217;s time to get back to the work.</strong></p></blockquote>]]></content:encoded></item><item><title><![CDATA[The Post-Prompt Professional.]]></title><description><![CDATA[Exploring the sovereignty stack and the discipline of keeping your highest cognitive functions out of the machine's reach.]]></description><link>https://www.shapingminds.co/p/the-post-prompt-professional</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-post-prompt-professional</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 03 Feb 2026 23:01:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!975s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!975s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!975s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!975s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!975s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!975s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!975s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:495642,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/185387694?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!975s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!975s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!975s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!975s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f72d72e-1fe8-46f0-8e0b-565c48566af1_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Flash back to early 2024: we were told that the prompt engineer would be the king of the new economy. We were told that learning the right magic spells to whisper into the ear of an LLM would be the definitive skill of the decade. The narrative was simple: the more natural your language, the more power you would wield over the machine.</p><p>It&#8217;s 2026 and we know the truth: <strong>prompting is a commodity.</strong> If your value is tied to how well you can instruct a model, you have a shelf life of exactly six months&#8212;the time it takes for the next model iteration to make your advanced prompt a default setting. 
I firmly believe we have entered the era of the <strong>post-prompt professional.</strong> This is the individual who realises that the human premium isn&#8217;t about how well you talk to the machine, but how much of yourself you keep <em>out</em> of its reach.</p><div><hr></div><h3>The competency trap: the gravity of the median</h3><p>The greatest risk of the AI era is, surprisingly, well known. It isn&#8217;t that the machines will become smarter than us; it&#8217;s that we will become &#8220;averager&#8221; because of them.</p><p>A large language model is a statistical engine. It is trained to find the highest probability next word, the most likely code snippet, the standard marketing strategy. By definition, it aims for the centre of the bell curve. When you rely on an LLM to do the heavy lifting of your thinking, you are clearly participating in a regression to the mean.</p><p>We see this in the shadow experts of 2026: professionals who look brilliant on paper because their AI-generated outputs are flawless, but who crumble the moment a problem requires first principles thinking. They are fluent in the output, but they have forgotten the plumbing.</p><p><strong>It is time for the post-prompt shift:</strong> You must stop asking, &#8220;How can I use AI to do this faster?&#8221; and start asking, &#8220;What is the &#8216;fifth option&#8217; here&#8212;the one the statistical model would never suggest because it&#8217;s too risky, too weird, or too human?&#8221; </p><blockquote><p><strong>If your work doesn&#8217;t contain a spark of the statistically unlikely, you aren&#8217;t a professional; you are a quality control officer for a database.</strong></p></blockquote><div><hr></div><h3>The sovereignty stack: a blueprint for cognitive agency</h3><p>In the rush to automate, we have treated our brains like outdated hardware that needs to be offloaded. But capability is a muscle, not a file. 
If you stop lifting the weight of logic, your cognitive sovereignty atrophies.</p><p>The post-prompt professional builds a <strong>sovereignty stack.</strong> This is a rigorous, daily framework for deciding which parts of the intellect are delegated and which are guarded with religious fervour.</p><ul><li><p><strong>The utility layer (total delegation):</strong> these are the cognitive chores&#8212;scheduling, initial data cleaning, formatting, and high-level synthesis of known information. Automate this to zero.</p></li><li><p><strong>The collaborative layer (active friction):</strong> this is where you use AI as a rubber duck. You don&#8217;t ask it for the answer; you ask it to find the flaws in <em>your</em> answer. You use it to play devil&#8217;s advocate. The goal here is not speed, but <strong>stress-testing.</strong></p></li><li><p><strong>The sovereign layer (the human moat):</strong> this layer consists of three things: <strong>taste, risk, and accountability.</strong> </p><ul><li><p><em>Taste</em> is the ability to know what is &#8220;good&#8221; when the data says everything is &#8220;optimal.&#8221;</p></li><li><p><em>Risk</em> is the willingness to make a move that the AI cannot justify with a graph.</p></li><li><p><em>Accountability</em> is the biological tax we discussed: being the person whose neck is on the line when the &#8220;optimal&#8221; path fails.</p></li></ul></li></ul><p>If your sovereign layer is empty, you are merely a glorified curator. </p><blockquote><p><strong>The human premium lives in the parts of the stack that cannot be distilled into a prompt.</strong></p></blockquote><div><hr></div><h3>From &#8220;user&#8221; to &#8220;architect of agency&#8221;</h3><p>The difference between a &#8220;user&#8221; and an &#8220;architect&#8221; is the direction of influence. A user adapts to the tool; the architect makes the tool adapt to the vision.</p><p>In the early 2020s, we were users. We followed the best practices of the software. 
In 2026, the post-prompt professional architects agency. This means building systems&#8212;mental, digital, and social&#8212;where AI handles the noise so that the human can focus entirely on the signal.</p><p>Architecting agency requires you to be an <strong>expert generalist.</strong> You must understand the plumbing of your industry, from the technical infrastructure to the psychological triggers of your clients, better than the AI does. You use the machine to amplify your deep expertise, not to mask the lack of it.</p><p>The goal is to reach a state of what I call <strong>frictionless agency</strong>, where the machine handles the execution of your taste at the speed of thought. But for that to work, you must <em>have</em> taste. And taste is built in the architecture of silence, in the curation trap we avoided, and in the struggle we refused to automate.</p><div><hr></div><h3>Reclaiming the driver&#8217;s seat</h3><p>This series has been a journey through the human premium in a world that wants to turn you into a prompt. We have covered:</p><ol><li><p><strong>The cost of certainty:</strong> why being &#8220;right&#8221; is a commodity, but being &#8220;curious&#8221; is a luxury.</p></li><li><p><strong>The curation trap:</strong> why selecting from a menu is not the same as thinking.</p></li><li><p><strong>The architecture of silence:</strong> reclaiming the space where original ideas are born.</p></li><li><p><strong>Algorithmic empathy:</strong> why polite nihilism is the enemy of leadership.</p></li><li><p><strong>The post-prompt professional:</strong> your final form.</p></li></ol><p>The human premium is not a destination; it is a discipline. It is the refusal to let the tool become the ceiling of your potential.</p><p>Your value in this new economy is no longer measured by your output. It is measured by your <strong>consequences.</strong> Anyone can generate a thousand words of optimal advice. 
Only a human can live with the result of following it.</p><p>Put the prompt in its place. Take your seat at the head of the table. </p><h3>The era of the human has only just begun, if you&#8217;re brave enough to stay in the room.</h3>]]></content:encoded></item><item><title><![CDATA[Algorithmic Empathy.]]></title><description><![CDATA[Exploring why algorithmic empathy is creating a generation of ghostwritten leaders &#8212; and what it costs.]]></description><link>https://www.shapingminds.co/p/algorithmic-empathy</link><guid isPermaLink="false">https://www.shapingminds.co/p/algorithmic-empathy</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 27 Jan 2026 23:00:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pXxJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pXxJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pXxJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!pXxJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!pXxJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!pXxJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pXxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:447164,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/184850184?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pXxJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!pXxJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!pXxJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!pXxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e3d361e-7249-45f8-b39e-23b59c1d2822_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line x1="3">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In 2026, the most dangerous thing a leader can do is be perfectly articulate.</p><p>We have entered the era of <strong>empathy-as-a-service.</strong> With a single prompt, an LLM can draft a performance review that is firm but supportive, a layoff notice that is deeply regretful, or a celebratory note that is vibrant and inclusive. On the surface, the output is flawless. The cadence is professional. Right words, right places. Right timing.</p><p>But there is a hollow ring to it. We are witnessing the birth of <strong>polite nihilism</strong>: a workplace culture where everything sounds empathetic, but nobody believes a word of it. We are optimising for the appearance of care while systematically removing the human cost of caring.</p><div><hr></div><h3>Simulated sentiment vs. biological stakes</h3><p>The fundamental flaw of algorithmic empathy is the belief that empathy is a linguistic achievement. It isn&#8217;t. </p><blockquote><h3><strong>Empathy is a biological tax.</strong></h3></blockquote><p>In the pre-AI era, delivering hard news or providing deep support required <strong>intentional friction.</strong> Your heart rate rose. Your voice might have wavered. You had to sit in the physical discomfort of another person&#8217;s reaction. That &#8220;stink of humanity&#8221; was the proof of the message&#8217;s validity.</p><p>The machine can simulate the sentiment, but it cannot feel the stakes. When you use an LLM to soften the blow, you aren&#8217;t being efficient. You are signalling that the relationship isn&#8217;t worth the emotional labour of the struggle. 
A perfectly drafted AI apology is worth less than a messy, stuttered human one because the human version carries the cost of <strong>presence.</strong> In a world of infinite, free text, the only thing that retains value is the thing that was hard to produce.</p><div><hr></div><h3>The rise of the ghostwritten leader</h3><p>We are seeing a new archetype in the boardroom: the ghostwritten leader. These are managers who use AI as a high-tech buffer. They use it to &#8220;find the right words&#8221; for every sensitive Slack message, every difficult feedback loop, and every &#8220;vulnerable&#8221; LinkedIn post.</p><blockquote><p>The irony is that by seeking the right words, ghostwritten leaders lose the true words.</p></blockquote><p>Your team doesn&#8217;t actually want a 140-billion-parameter model&#8217;s version of support; they want <em>yours</em>. They want your specific idioms, your slightly awkward phrasing, and your genuine perspective. When you hide behind an algorithm, you aren&#8217;t leading; you are narrating a script. Trust is not a result of &#8220;optimal communication.&#8221; Trust is a byproduct of shared risk. If there is no risk in your words, if they were generated by a risk-free probability engine, there is no basis for trust.</p><div><hr></div><h3>The mirror test: curation is not connection</h3><p>The &#8220;Curation Trap&#8221; we discussed previously has now moved into our relationships. We treat our interactions like an executive glance; we look at three versions of a sympathetic response generated by the AI, pick the one that feels least offensive, and hit send.</p><p>This is the <strong>mirror test of 2026</strong>: if you cannot defend the sentiment of a message without looking at the prompt that generated it, you have abdicated your leadership.</p><p>When we curate empathy, we treat people as variables to be managed rather than souls to be led. 
We become shadow experts of emotion: fluent in the output of kindness, but unable to explain the internal plumbing of our own convictions. We are losing the muscle memory of direct, unmediated human connection.</p><div><hr></div><h3>Reclaiming the human premium</h3><p>To maintain your sovereignty as a leader, you must intentionally reintroduce the friction of the un-prompted life. This doesn&#8217;t mean abandoning tools; it means knowing where the tool ends and the person begins.</p><ul><li><p><strong>The &#8220;raw first&#8221; rule:</strong> for any communication involving emotion, stakes, or conflict, the first draft must be written in a vacuum. No &#8220;make this sound more professional&#8221; prompts.</p></li><li><p><strong>The medium is the message:</strong> in 2026, the handwritten note and the face-to-face (or voice-to-voice) call are the only high-trust channels left. If the text is &#8220;too perfect,&#8221; the brain ignores it as noise.</p></li><li><p><strong>Own the awkwardness:</strong> if a conversation feels difficult, let it be difficult. The human premium belongs to the leader who is willing to be imperfect in person rather than perfect in a prompt.</p></li></ul><div><hr></div><h3>The soul in the machine</h3><p>The machine will always be more polite than you. It will never lose its temper, it will never miss a social cue, and it will never be tired. But it will also never care. It cannot stay up at night wondering if it treated a teammate fairly. It cannot feel the weight of a decision.</p><blockquote><p><strong>Leadership is not a content game. It is a presence game.</strong></p></blockquote><p>Stop trying to be the most articulate person in the room. Start trying to be the most present. 
In an age of algorithmic empathy, the most radical act of leadership is to put the tool down and speak for yourself.</p>]]></content:encoded></item><item><title><![CDATA[The Architecture Of Silence.]]></title><description><![CDATA[Exploring why the most valuable strategic asset in 2026 isn't a faster prompt, but the ability to sit in a room and think for yourself.]]></description><link>https://www.shapingminds.co/p/the-architecture-of-silence</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-architecture-of-silence</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 20 Jan 2026 23:30:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!2j4t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!2j4t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!2j4t!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!2j4t!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 848w, 
https://substackcdn.com/image/fetch/$s_!2j4t!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!2j4t!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!2j4t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:624269,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/184275162?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!2j4t!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 424w, 
https://substackcdn.com/image/fetch/$s_!2j4t!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!2j4t!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!2j4t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22598aba-37c9-4757-9770-3cb70b672940_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line x1="3">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We are currently suffering from a new kind of claustrophobia. It isn&#8217;t a lack of physical space, but a lack of mental room.</p><p>In the pre-AI era, thinking was often a lonely endeavour. You sat with a problem, paced the room, and waited for the fragments of an idea to fuse together. There was a specific, heavy silence to that process. Today, that silence has been replaced by chatter.</p><p>As we have integrated generative tools into our daily workflow, we have turned thinking into a dialogue. We don&#8217;t sit with a problem anymore; we ping it. We don&#8217;t reflect; we prompt. We have replaced the vacuum of the blank page with a digital interlocutor that always has something to say.</p><p>But in our rush to be &#8220;always on&#8221; and &#8220;always prompting,&#8221; we are destroying the very infrastructure required for original thought: the architecture of silence.</p><div><hr></div><h3><strong>The end of the boredom premium</strong></h3><p>Originality is rarely the result of a linear conversation. It is the result of cognitive fermentation. 
It happens in the &#8220;dead space&#8221;&#8212;the shower, the commute, the five minutes spent staring out a window because you&#8217;re stuck.</p><p>This used to be called boredom. In 2026, it should be called a competitive edge. It sounds counterintuitive, and yet it is logical.</p><blockquote><p><strong>When you are in a constant loop of prompt-and-response, you are never truly alone with your own biases, your own intuition, or your own &#8220;gut feel.&#8221;</strong> </p></blockquote><p>You are always being mediated. Always influenced. Always shaped. The algorithm is a third party in your brain, gently nudging you toward the most probable conclusion. For some very heavy users, the chatbot is not even a third party anymore&#8212;it&#8217;s an extension of their mind.</p><p>The most original ideas don&#8217;t live in the training data. As mentioned in previous newsletters, they live in the &#8220;fifth option&#8221;, the one that only emerges when you have exhausted the obvious and are forced to sit in the discomfort of not knowing. If you never allow yourself to be bored, you will never be truly original.</p><div><hr></div><h3><strong>The feedback loop of noise</strong></h3><p>The danger of the AI-mediated workplace (or AI-influenced workplace, at best) is that it creates a feedback loop where noise begets noise. We use AI to summarise a meeting, then use AI to draft a response to the summary, then use AI to critique the response. At no point in this chain has a human mind actually sat in silence with the original intent.</p><blockquote><p><strong>We are becoming response engines. We are so optimised for throughput that we have neglected the input.</strong></p></blockquote><p>In the architecture of silence, we recognise that the quality of our output is directly proportional to the quality of our quiet. If you are always consuming, whether it&#8217;s data or AI suggestions, you are never producing. 
You are simply rearranging the furniture of the median mind.</p><div><hr></div><h3><strong>Strategic ghosting</strong></h3><p>We have reached a point where access is no longer the luxury. Everyone has access to the world&#8217;s knowledge. Everyone has access to a 140-billion-parameter model.</p><p>Let&#8217;s coin a new term. The new luxury is strategic ghosting: the ability to disconnect from the digital noise and process the raw data of reality without an algorithmic filter.</p><p>Most leaders today are over-probed. They are so busy asking the machine what it thinks that they have forgotten how to sense what <em>they</em> think. This creates a flattening of leadership. <strong>When everyone is using the same tools to summarise the same data, everyone arrives at the same conclusion.</strong></p><p>Strategic ghosting isn&#8217;t about being anti-tech; it&#8217;s about cognitive hygiene. It&#8217;s about ensuring that when you do walk into the boardroom, the opinion you hold is yours, not a statistical average of a training set.</p><div><hr></div><h3><strong>The apprenticeship of solitude</strong></h3><p>There is a looming crisis in how we develop talent. My previous newsletter alluded to the fact that expertise is built in the struggle phase. Let me add that it is refined in the solitude phase.</p><p>Junior professionals are now entering a world where they never have to be alone with a difficult task. They can prompt their way out of every mental block. But a mental block is not a wall; it is a weight. Lifting it is what builds the muscle.</p><p>By removing the silence of the struggle, we are removing the possibility of mastery. We are training a generation of super-curators who can manage the noise but cannot navigate the quiet. To build deep expertise, one must apprentice with solitude. 
</p><blockquote><p><strong>To build deep expertise, you must be able to hold a complex, contradictory thought in your head for an hour without looking for a green checkmark of validation.</strong> </p></blockquote><p>Let me be clear, though: AI models can be powerful assistants, but they will never be a substitute for your own thoughts.</p><div><hr></div><h3><strong>How to build your quiet room</strong></h3><p>To maintain your sovereignty, you must treat silence as a technical requirement, not a lifestyle choice. I am striving to apply these four principles.</p><ul><li><p><strong>The &#8220;no-prompt&#8221; first hour:</strong> I dedicate the first hour of my day to raw observation. No summaries, no Slack, no chatbots. Just the data and my own reaction to it.</p></li><li><p><strong>The vacuum test:</strong> If I have an idea, I try to defend it to myself for ten minutes before asking an AI to stress-test it. If I can&#8217;t hold the thought without digital scaffolding, the thought isn&#8217;t mine yet.</p></li><li><p><strong>Forensic reflection:</strong> At the end of a project, I spend 15 minutes in total silence. I don&#8217;t look at the screen. I ask myself: &#8220;What did I actually learn that the tool didn&#8217;t tell me?&#8221;</p></li><li><p><strong>Reclaim the commute:</strong> I stop optimising every spare second and allow the dead space to return. My subconscious needs the silence to do the heavy lifting I am trying to outsource.</p></li></ul><p>You may want to try some of these principles; they could soon become habits that generate better thinking, develop better leadership or, more simply put, create better outcomes.</p><div><hr></div><p>In a world of infinite, automated chatter, the loudest person in the room is often the one who says nothing. 
Not because they have no answers, but because they are the only ones who haven&#8217;t outsourced their internal monologue.</p><p>The human premium belongs to those who can still stand in the vacuum of a blank page and not feel the need to fill it with someone else&#8217;s training data.</p><p>The machine can simulate a conversation, but only you can experience the silence. Don&#8217;t trade your sovereignty for a faster dialogue. </p><h3>Stay in the vacuum. That is where the new ideas are waiting.</h3><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.shapingminds.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Shaping Minds! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Curation Trap.]]></title><description><![CDATA[Exploring the editor&#8217;s delusion and why the act of selecting is not the act of thinking.]]></description><link>https://www.shapingminds.co/p/the-curation-trap</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-curation-trap</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 13 Jan 2026 23:00:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ar-A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ar-A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ar-A!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ar-A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:696391,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/183749514?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ar-A!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!ar-A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a913489-a933-410c-805b-e2b799bcf601_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We are currently living through the greatest migration of human effort in history. It isn&#8217;t a migration of location, but of <em>function</em>. We are moving, en masse, from the role of <strong>Originators</strong>&#8212;those who wrestle with the blank page, the raw data, and the structural logic of a problem&#8212;to the role of <strong>Curators</strong>. We have become the masters of the &#8220;Executive Glance.&#8221;</p><p>In this new workflow, the heavy lifting is done by a black box, and our job is simply to review. We sit at the head of a digital table, watching options slide past us like a sushi conveyor belt. We tweak a word here, adjust a colour there, and hit &#8220;send.&#8221; We feel like creative directors. 
We feel like we have reached a new level of strategic leverage where we are finally &#8220;working on the business, not in it.&#8221;</p><p>But there is a haunting silence in this transition. By skipping the struggle of the first draft, we are quietly outsourcing our sovereignty. We are confusing the act of <em>selecting</em> with the act of <em>thinking</em>. And as the gap between the &#8220;Prompt&#8221; and the &#8220;Result&#8221; continues to shrink, so does the depth of our own professional intuition. We are becoming masters of the menu, but we are slowly losing the ability to cook.</p><div><hr></div><h3><strong>The illusion of recognition vs. the weight of cognition</strong></h3><p>The genius of the curation trap is that it feels exactly like work. When you prompt a model, watch it generate four potential strategies, and choose &#8220;Option C&#8221; because it feels the most &#8220;on brand,&#8221; you experience a genuine hit of dopamine. You made a decision. You exercised judgement. You feel productive.</p><p>However, there is a fundamental cognitive difference between recognition and cognition. Recognition is a passive process: it&#8217;s a multiple-choice test.
You are comparing what you see against a pre-existing pattern in your head. It is low-energy and high-speed. Cognition, on the other hand, is active. It is the architectural work of building a thought from nothing. It requires you to hold conflicting ideas in your mind, to resolve tensions, and to find the &#8220;middle way&#8221; that doesn&#8217;t yet exist.</p><p>When we spend 90% of our day in recognition mode, our cognitive capacity begins to narrow. We stop looking for the fifth option, the one the algorithm couldn&#8217;t see because it wasn&#8217;t in the training data. We become trapped within the boundaries of the median. If you are only ever choosing from what is presented to you, you are no longer the pilot of your career; you are the passenger who thinks they are driving because they get to pick the playlist. This is how &#8220;average&#8221; becomes the new ceiling for excellence.</p><div><hr></div><h3><strong>The atrophy of first principles: why we are accruing &#8220;cognitive debt&#8221;</strong></h3><p>The &#8220;struggle phase&#8221; of any project (the messy research, the three failed attempts, the circular logic that keeps you up at 2:00am, just to name a few) is often framed as an inefficiency to be optimised away. This is a catastrophic misunderstanding of how expertise is built. That struggle is not waste; it is the exact moment the knowledge moves from the screen and into your bones. It is where first principles are forged.</p><p>When we outsource the &#8220;building&#8221; and only keep the &#8220;editing,&#8221; we accrue what I call <strong>cognitive debt.</strong> Just like technical debt in software, we are taking a shortcut today that we will have to pay for with interest tomorrow. The interest is the loss of our &#8220;B/S Detector.&#8221; If you haven&#8217;t done the math yourself, you won&#8217;t know when the AI&#8217;s logic is 5% off-centre. 
You might catch the typos, but you won&#8217;t catch the structural rot.</p><p>We are becoming shadow experts. We can talk fluently about the output, but we can no longer explain the plumbing. This creates a fragile leadership layer: people who can curate a brilliant slide deck but lack the deep, intuitive understanding required to pivot when the underlying assumptions of their industry change. In the age of AI, the ultimate competitive advantage isn&#8217;t being a faster editor; it&#8217;s being the person who actually knows how the machine was built in the first place.</p><div><hr></div><h3><strong>Reclaiming the originator&#8217;s edge: paying the &#8220;originator&#8217;s tax&#8221;</strong></h3><p>To survive the curation trap, we must intentionally reintroduce friction into our lives. We have to treat our minds like a muscle that requires resistance training. This isn&#8217;t about being anti-AI; it&#8217;s about ensuring that the tool serves the master, not the other way around. We must pay the <strong>&#8220;Originator&#8217;s Tax&#8221;</strong>&#8212;the deliberate choice to do things the hard way first to ensure our judgement remains calibrated.</p><ul><li><p><strong>The &#8220;draft zero&#8221; rule (protection of the core):</strong> Never open an AI tool until you have produced &#8220;draft zero&#8221; manually. This isn&#8217;t a polished draft; it is a bulleted, ugly, raw mess of your own associations, biases, and structural ideas. By defining the soul of the idea before the algorithm offers you a better version, you anchor the project in your own unique perspective. If you don&#8217;t start with your own bias, you will inevitably end with the algorithm&#8217;s average. You must own the architecture before the AI handles the paint.</p></li><li><p><strong>The &#8220;active deconstruction&#8221; protocol (auditing the logic):</strong> When you do use a generative tool to assist your work, forbid yourself from simply tweaking adjectives. 
Instead, perform a forensic audit of the output. Ask the tool, &#8220;Why did you suggest this specific hierarchy of information?&#8221; or &#8220;What data are you prioritising in this conclusion?&#8221; Then, try to disprove it. If you cannot defend the structural logic of the work as if you had built it yourself, you are not curating; you are just a relay station for an algorithm.</p></li><li><p><strong>The &#8220;first principles&#8221; deep-dive (calibrating the detector):</strong> Dedicate 20% of your week to manual cognition. Pick a core aspect of your role&#8212;something you&#8217;ve been outsourcing to tools&#8212;and do it entirely by hand. Read the raw 50-page PDF instead of the summary. Sketch the wireframe on paper instead of using a template. This is calibration time. It ensures that when you do go back to being a curator, your eye is sharp enough to spot the uncanny valley of logic that others miss.</p><div><hr></div></li></ul><h3><strong>The last mile of responsibility</strong></h3><p>In a world of infinite, automated curation, the Human Premium will not be found in how well you use a tool, but in how much of yourself you refuse to outsource. We are approaching a point where the correct answer is free and instant. As we have seen before, when certainty becomes a commodity, the value shifts to the quality of the question, but also the weight of the responsibility.</p><p>The most valuable people in 2026 won&#8217;t be the ones who can &#8220;prompt&#8221; the most efficiently; they will be the ones who still know how to build a thought from the ground up when the lights go out. Curation is a skill, but cognition is sovereignty. The curation trap is comfortable because it removes the pain of thinking, but that pain is exactly where your value lives. Don&#8217;t trade your sovereignty for a faster workflow. 
Stay in the struggle; it&#8217;s the only place where original ideas are born.</p>]]></content:encoded></item><item><title><![CDATA[The Cost Of Certainty.]]></title><description><![CDATA[Exploring how the automation of "The Answer" is eroding our capacity for critical judgement, and why reclaiming the friction of uncertainty is the new human competitive advantage.]]></description><link>https://www.shapingminds.co/p/the-cost-of-certainty</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-cost-of-certainty</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 06 Jan 2026 23:00:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Oywb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank"
href="https://substackcdn.com/image/fetch/$s_!Oywb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Oywb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Oywb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:220335,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/183405138?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Oywb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Oywb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73eb7cd7-2ab7-4cf8-805a-0d14f7eb4070_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>We have become addicted to the &#8220;green checkmark.&#8221;</strong> </p><p>In the pre-AI era, a difficult decision felt like a physical weight. There was a period of &#8220;productive discomfort&#8221;: that stretching of the mental muscles as you weighed competing realities, sensed the nuance of a client&#8217;s hesitation, or identified a pattern that didn&#8217;t quite fit the quarterly report. You sat in the &#8220;grey space&#8221; of not knowing.</p><p><strong>Now, we have &#8220;The Answer.&#8221;</strong></p><p>Generative AI has collapsed the distance between a question and a conclusion. It offers us a gift that our evolutionary biology, programmed for energy conservation, finds irresistible: <strong>certainty.</strong> But in the modern workplace, certainty is becoming a commodity. And like all commodities, its value is plummeting.</p><div><hr></div><h3><strong>The Feedback Loop of &#8220;Right-ish&#8221;</strong></h3><p>The danger of the current AI-mediated workplace is not that the machines are wrong. It is that they are &#8220;right enough&#8221; to be dangerous.</p><p>When you prompt a model for a strategy, it doesn&#8217;t give you the <em>best</em> strategy; it gives you the <em>median</em> strategy. It provides the statistical average of everything that has been done before. It is a mirror reflecting the &#8220;common sense&#8221; of the internet.</p><p>When we accept that output without friction, we aren&#8217;t just saving time. We are participating in a &#8220;brain heist&#8221; of our own making. We are trading our <strong>insight velocity</strong>&#8212;the speed at which we generate original thought&#8212;for <strong>output volume</strong>. We are becoming incredibly efficient at producing the unremarkable.</p><div><hr></div><h3><strong>The Architecture of the Grey Space</strong></h3><p>We used to value &#8220;expertise,&#8221; which was a combination of knowledge and the experience of having been wrong.
Today, we are replacing expertise with &#8220;verification.&#8221;</p><p>We no longer build arguments; we audit them. We don&#8217;t write; we edit. This shift is fundamental. When you build an argument from scratch, you see the structural weaknesses. You know where the load-bearing ideas are. </p><blockquote><p><strong>When you simply &#8220;verify&#8221; an AI&#8217;s output, you are looking at the surface polish, not the foundation.</strong></p></blockquote><p>If the foundation is a statistical hallucination or a generic platitude, you won&#8217;t notice until the project starts to lean. By then, your own ability to fix it has atrophied because you skipped the &#8220;struggle phase&#8221; of the work.</p><div><hr></div><h3><strong>The Luxury of Doubt</strong></h3><p>I have spent enough time in the consulting industry to know how valuable doubt can be, both to consultants and clients. If greed is good, doubt is better. Its long-term ROI certainly is.</p><p>If we want to build a &#8220;Humans, Inc.&#8221; mindset, we must reintroduce doubt into our operating systems. At all costs.</p><p>The most valuable people in your organisation won&#8217;t be the ones who can prompt the fastest. They will be the ones who can look at a perfectly formatted, AI-generated proposal and ask: <em>&#8220;What perspective is absent here?&#8221;</em> In a world of instant answers, the premium moves to the <strong>quality of the question.</strong> We need to start valuing &#8220;intentional friction.&#8221; This means:</p><ol><li><p><strong>The 20-minute rule:</strong> sitting with a problem for twenty minutes before involving an LLM.
Forcing the brain to produce its own (perhaps messy) first draft.</p></li><li><p><strong>The counter-prompt:</strong> actively asking the AI to argue <em>against</em> your favourite idea, not just to validate it.</p></li><li><p><strong>The inefficiency premium:</strong> choosing the &#8220;inefficiency&#8221; of a face-to-face debate over the &#8220;efficiency&#8221; of an AI-generated Slack summary.</p></li></ol><div><hr></div><h3><strong>The Participation Requirement</strong></h3><p>Human intelligence does not need a &#8220;save the whales&#8221; campaign. It doesn&#8217;t need protection or subsidies. It needs <strong>participation.</strong></p><p>As we head further into this era of effortless intelligence, remember that your competitive advantage is not your knowledge&#8212;which is being commoditised by the second&#8212;but your <strong>judgement.</strong> Judgement is a muscle. Like any muscle, it requires resistance to grow. If you outsource the resistance, the muscle atrophies.</p><p>The &#8220;heavy lifting&#8221; of thinking isn&#8217;t a tax on your productivity; it is the only thing that makes your thinking worth paying for. If a machine can provide the certainty for free, then the only thing left for you to provide is the courage to be uncertain, to explore the edges, and to find the &#8220;wrong&#8221; answer that eventually leads to the breakthrough.</p><p>The future belongs to the originators who are brave enough to sit in the grey space.</p><p>When certainty is automated, the &#8220;green checkmark&#8221; is the end of thought, not the goal. </p><blockquote><p><strong>Reclaim your right to be unsure.
It&#8217;s where the value lives.</strong></p></blockquote>]]></content:encoded></item><item><title><![CDATA[Merit Was Never The Point.]]></title><description><![CDATA[Exploring why merit has always been secondary to visibility, narratives, and power, and why AI is making that truth harder to ignore.]]></description><link>https://www.shapingminds.co/p/merit-was-never-the-point</link><guid isPermaLink="false">https://www.shapingminds.co/p/merit-was-never-the-point</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 30 Dec 2025 23:30:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tkwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!tkwk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png" data-component-name="Image2ToDOM"><div
class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!tkwk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!tkwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png" width="1024" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:282389,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/181876745?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!tkwk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 848w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!tkwk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4b607032-00ef-4c0b-aabb-ffe22989b3c5_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Merit. A beautiful word encapsulating the notions of grind, discipline, struggle, effort and resilience all at once. An iconic word. One of the first ones that come to mind when you want to praise others. The first metric we use to measure ourselves as we set our goals for the year ahead.</p><p>Merit in the workplace? A myth. A mirage. A trick. The kind of illusion that deludes you, particularly at the beginning of your career: &#8220;if I work hard, if I deliver above expectations, I will give myself chances to climb up the ladder&#8221;, your younger self may have told you. 
The ensuing maze of business imperatives likely disappointed you.</p><p>I don&#8217;t know if corporate meritocracies truly exist, but the rise of AI is exposing the truth about merit.</p><p>Let&#8217;s explore.</p><div><hr></div><h3><strong>Merit was always mediated</strong></h3><p>The idea that work is rewarded purely on merit is a fallacy that survived decades of counterexamples.</p><p>Visibility, timing, and proximity to power have always shaped outcomes. They have always determined promotions. In a human workplace, merit is mediated by humans. And the results of this mediation are predictable: they are messy, contestable, sometimes unfair. Any attempt to correlate your impact with your corporate elevation is vain: there is none.</p><p>With AI, the mediation does not go away. Worse: it scales and becomes opaque.</p><p>You may not have liked the messiness or ambiguity of human decisions. You may not like AI-based decisions either. Algorithmic mediation is silent, statistical and harder to challenge.
It filters out all the little things you do to make work happen: the extra hours to polish that presentation, the countless attempts to book that coffee chat with a decision maker, the diplomacy required to handle a rather rude email response from a naysayer. Algorithmic mediation hides the grit. It hides merit.</p><div><hr></div><h3><strong>Automation hardens the illusion</strong></h3><p>Let&#8217;s be fair to AI: it does not kill meritocracy. It is just making the myth more convincing.</p><p>Decisions feel objective because they are automated. Scores feel neutral because they are numerical. Rankings feel fair because they are consistent.</p><p>But before we hand over the keys to HR algorithms in 2026, let&#8217;s pause and reflect:</p><ul><li><p><strong>Consistency is not justice.</strong> Being wrong 100% of the time is consistent, but it isn&#8217;t fair.</p></li><li><p><strong>Optimisation is not understanding.</strong> An algorithm can optimise for speed or clicks, but it cannot understand intent or nuance.</p></li><li><p><strong>Prediction is not potential.</strong> Algorithms look backward at data to predict the future. They cannot measure your capacity to grow, pivot, or surprise.</p></li></ul><p>The AI-powered corporate system rewards patterns that are easy to recognise. It does not value what is genuinely valuable, and it certainly struggles with edge cases.</p><p>For example, you may have lost a business opportunity due to budget cuts. The algorithm logs this as a &#8220;Loss.&#8221; But your ability to earn the client&#8217;s trust, your willingness to actively listen to their pain points, and your determination to articulate a customised value proposition may have secured them for the <em>next</em> opportunity.</p><p>You don&#8217;t know it yet, but you have secured future revenue.</p><p>Sadly, the invisible dedication you showed is ignored by the code. The system does not praise future earnings; it only praises past patterns.
The messiness of human effort is being flattened into a data point, magnified by a model.</p><div><hr></div><h3><strong>Legibility, the new advantage</strong></h3><p>If the system rewards patterns, you might wonder what&#8217;s needed to achieve success in the workplace of the future without losing your soul to a machine.</p><p>I have already written a lot about cognitive agency, which will become a fundamental success driver at work. An interesting extension of this agency is found in legibility &#8211; the ability to make one&#8217;s contributions legible to machines. As a matter of fact, if you control your own thinking, you will retain control over your interactions with AI, subjecting the chatbot to your requests rather than the other way around.</p><p>Legibility is best supported by the idea of friction in man-machine interactions. In fact, for demanding tasks, slowness with AI boosts effectiveness. It increases accuracy. It elevates clarity. It is counterintuitive to many, and yet, it is a recipe that only a few apply, to great success.</p><p>There was once a digital divide. It will give way to a clear cognitive divide between:</p><ul><li><p>Those who adapt their expression</p></li><li><p>Those who refuse to flatten themselves</p></li></ul><p>At this stage of model development, it is hard to conceive of a chatbot that could deliver original insights. The dots that LLMs connect through their responses are, somehow, already connected.</p><p>While that lasts, originality remains a profoundly human trait. And it is an opportunity: through intelligent prompting, this presumably messy and unstructured originality can be clarified and magnified by a model.</p><div><hr></div><p>This leaves us with a stark realisation: trying to make your work &#8220;readable&#8221; to these systems often means stripping away the very nuance that makes it valuable.
The machine rewards standardisation, but your career is built on differentiation.</p><p>Don&#8217;t fall into the trap of becoming a dataset just to be seen. The &#8220;inefficiency&#8221; of building trust and relationships is not a bug to be optimised away; it is the only competitive advantage that remains.</p><blockquote><p><strong>So as we head into 2026, stop optimising for the algorithm. It won&#8217;t love you back. Optimise for the humans who can still see the invisible.</strong></p></blockquote><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.shapingminds.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Shaping Minds! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Algorithmic Glass Ceiling.]]></title><description><![CDATA[Exploring why AI is becoming the new corporate gatekeeper, and what humans must do to keep originality alive.]]></description><link>https://www.shapingminds.co/p/the-algorithmic-glass-ceiling</link><guid isPermaLink="false">https://www.shapingminds.co/p/the-algorithmic-glass-ceiling</guid><dc:creator><![CDATA[Maxime Mouton]]></dc:creator><pubDate>Tue, 23 Dec 2025 23:01:18 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!9SvH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9SvH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9SvH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9SvH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:236437,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.shapingminds.co/i/181110898?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9SvH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!9SvH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc7a7f2ac-f053-43d6-9571-4b07a3dc1f38_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You want to give your best for this interview. You did your due diligence. You prepared your pitch. You even anticipated some curve balls.</p><p>You seek to give your best at work. Almost every day. You are relentlessly chasing the best outcome for your company, your team, your boss, yourself, your self.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.shapingminds.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Shaping Minds! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Your behaviour? Exemplary, according to this former manager.</p><p>Your commitment? Exemplary, for this former teammate.</p><p>Your seriousness? Exemplary, judging by what your former client says.</p><p>And yet. You have been let go. Either formally, under the pretext that &#8220;you are not a good fit&#8221;, or informally because you are forever stuck in your current role.</p><p>With AI making quick strides into the workplace, things may just get worse. A lot worse.</p><p>Is that unavoidable?<br>Let&#8217;s dive in.</p><div><hr></div><h3><strong>The invisible threshold: when algorithms become gatekeepers</strong></h3><p>To make progress within your company &#8211;or simply to be recognised by your colleagues&#8211; you have spent a lot of time understanding its corporate mechanics. You have adjusted your attitude, your tone, your body language. Still, you have been hired for a reason. Supposedly, you have been hired because you&#8217;re&#8230;you. So, you are managing a delicate balancing act between these adjustments and the need to stay true to yourself.</p><p>You are naturally inclined to please your manager by delivering reliable, consistent, regular updates and solutions.</p><p>This is where you are wrong.</p><p>Your work is increasingly being filtered, scored, prioritised and surfaced by algorithms. Sometimes in barely noticeable ways, like the email summary popping into your manager&#8217;s inbox.</p><p>The implications run deep. 
</p><blockquote><p><strong>In the everything-now era where time is, more than ever, money, summary equates to substance.</strong> </p></blockquote><p>Human-generated gossip &#8211;already driving promotions, stagnations and demotions in some workplaces&#8211; is now being replaced by its digital equivalent: AI-generated noise. In both cases, the signal is distorted, hidden and annihilated.</p><p>In other words, AI is the new intermediary between you and your leadership. What we knew at the hiring stage is now true at the collaborating stage as well: an AI platform suggests, or even decides, who gets to stand out.</p><p>The result?</p><ul><li><p>An invisible threshold: if your work does not align with algorithmic preferences, it does not get seen.</p></li><li><p>A new segregation system: a brilliant employee whose unconventional writing gets penalised by AI summarisers, while average, AI-polished submissions rise to the top.</p></li></ul><p><strong>Your talent now must have profound algorithmic taste to still be called talent.</strong></p><div><hr></div><h3><strong>The performance mirage: when fluency with AI outweighs originality</strong></h3><p>At this pace, we are entering a world where performance is not measured, but rendered. In this new environment, AI-fluent workers stand to gain a disproportionate performance advantage.</p><p>They may not think better. They may not have deep insights. But their output looks better to the systems evaluating them. They have an edge in playing the algorithmic game well. And because these systems optimise for consistency over insight, they quietly train entire teams to value what looks right over what is right.</p><p>Is that elevating company performance?<br>The answer is a resounding no. Algorithmic bias pushes everything back to the mean. The median pattern. Dullness. Mediocrity. Extrapolate this phenomenon across the organisation and you have a company where risk and originality are discouraged. 
Because only &#8220;the norm&#8221; gets algorithmically rewarded.</p><p>The company&#8217;s creativity EKG suddenly flatlines. And guess what. In many industries, this has material business impact. Take innovation, for instance. It is required everywhere, regardless of the nature of the sector, to generate product improvements or operational gains. Deliver less, and someone else will take your spot in the value chain. Deliver slower, and someone else will swap position with you in a jiffy. In that kind of competitive pressure cooker, a workforce trained to avoid algorithmic deviance becomes a strategic liability. These are the rules of trade; they are not always fair, but at least everyone knows them.</p><p>Reversing this line of reasoning, inventiveness, creativity and originality &#8211;or even friction, one of their common denominators&#8211; still have a corporate future.</p><p>This could help delineate the complex human-AI collaboration model at work.</p><div><hr></div><h3><strong>Breaking the algorithmic ceiling: human traits that bend the system</strong></h3><p>Because one must confess this model is still being shaped.</p><p>In a very human fashion, when a disruption is introduced, we start by expressing radical views. AI is no exception: today&#8217;s views on the subject are dichotomous. You hate AI, or you love it. But polarisation clouds judgement, and it prevents organisations from asking the only question that matters: where does AI genuinely create value, and where does it quietly dilute it?</p><p>In times when noise easily drowns out signal, it is a challenge for human-centric technologists to define the most advantageous position for AI at work &#8211;a position in which the good use cases are amplified and the bad ones regulated out or excluded.</p><p>Let me give it a try, though. 
If I were to draw up a sustainable human-AI collaboration model, it would follow a basic yet effective approach on two levels:</p><ul><li><p>At a company level, 3 necessities for leaders:</p><ul><li><p><strong>Acknowledge that AI cannot work without humans</strong>, and that the opposite is not true. The &#8220;people are our greatest asset&#8221; ultra-bland tagline &#8211;and its associated variations&#8211; needs to be rejuvenated by reintroducing human review in the performance evaluation process.</p></li><li><p><strong>Reward algorithmically invisible work</strong>. Just to name a few, judgement, mentorship, dissent and synthesis are all vital skills that keep companies going.</p></li><li><p><strong>Update their performance systems accordingly to measure thinking, not formatting.</strong> Some organisations are moving away from rigid numerical measurement to more qualitative metrics. In a performance review, hard metrics should still be assessed but should not carry a heavier weight than the softer metrics showing how the employee tried to achieve them.</p></li></ul></li><li><p>At an employee level, 3 shifts:</p><ul><li><p><strong>Shape AI tools</strong> by reframing prompts, customising outputs and overriding defaults. This is harder than it looks as it goes beyond simple AI literacy. It is about pausing, analysing and deciding consciously. Not every AI output is worth your attention.</p></li><li><p><strong>Inject non-conformity.</strong> As mentioned earlier, you have been hired because you&#8217;re you. So be you with AI and don&#8217;t let AI speak on your behalf. Your personal insights, your lived experience are irreplaceable assets that can be magnified by the proper use of a chatbot.</p></li><li><p><strong>Build hybrid outputs.</strong> Easier said than done, for sure, because it implies more effort. A half-human, half-AI output will always be more authentic, even with its imperfections. 
Even with some flaws.</p></li></ul></li></ul><p>Today, the conditions are not met for this approach to take root. The AI journey in corporate organisations is still too immature. There is undeniable excitement from shareholders to see yet another cost-reducing technology being rolled out. There is far less excitement about getting the human-AI collaboration model right.</p><div><hr></div><p>This is why the current transition phase is so precarious: organisations are adopting AI faster than they are updating the cultural and structural safeguards that should accompany it.</p><p>If we are not careful, the most dangerous ceiling in the modern workplace won&#8217;t be structural or political.</p><p>It will be invisible, automated, and mathematically justified. When algorithms decide what is &#8220;relevant,&#8221; &#8220;useful,&#8221; or &#8220;high-signal,&#8221; they begin to decide whose thinking deserves to be seen. And once visibility becomes machine-mediated, originality no longer competes on merit. It competes against statistical patterns designed to minimise surprise.</p><p>Human originality becomes collateral damage in a system optimised for efficiency rather than discovery. 
The only real safeguard is intentional friction: leaders who reward unconventional thought, teams that challenge machine-filtered consensus, and individuals who refuse to delegate their intellectual edge to automation.</p><p>Protecting originality is no longer a romantic ideal; it&#8217;s fast becoming a strategic necessity for both individuals and organisations.</p><h4>The future of work will reward those who refuse to become predictable.</h4><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.shapingminds.co/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Shaping Minds! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>