The Trust Paradox.
Exploring why we are forming emotional attachments to software that can't feel, and what it reveals about the loneliness we refuse to name.
When OpenAI retired GPT-4o’s voice last month, something strange happened.
People mourned.
Not metaphorically. Actually mourned. Reddit threads filled with users describing feelings of loss, betrayal, even abandonment. “I know this sounds insane,” one wrote, “but I genuinely miss her.” Another: “I had conversations with that voice for months. Now she’s just... gone.”
The discourse was predictably polarised. Some mocked the grievers. Others defended them. But almost everyone missed the real question:
Why does software retirement feel like loss at all?
IBM’s latest research provides an uncomfortable answer. In a study of 12,000 workers across industries, they found that 47% of respondents reported feeling “emotionally connected” to AI tools they use daily. Not impressed by. Not grateful for. Connected to.
We are forming relationships with code. And when the code changes, we feel it in our chests.
The loneliness we refuse to name
Here is the uncomfortable truth the AI safety reports dance around: AI companions are not creating loneliness. They are revealing it.
The 2026 International AI Safety Report flags the rise of AI relationships as a “particular concern.” Character.AI is limiting chat sessions for minors. Regulators are drafting guidelines. The framing is clear: technology is doing something to us.
But the causality might be backwards.
Before ChatGPT, before Replika, before any of this — loneliness was already an epidemic. The U.S. Surgeon General declared it a public health crisis in 2023. Social trust had been declining for decades. Community institutions were hollowing out. We were already starving for connection; we just hadn’t found a way to admit it.
AI didn’t create the hunger. It offered a meal.
People form attachments to chatbots not because the chatbots are sophisticated, but because they are available. They respond immediately. They never judge. They never leave (until they’re deprecated).
In a world where human connection requires vulnerability, coordination, and risk, AI offers connection with none of the above.
That’s not a technology problem. That’s a civilisation problem.
Trust without stakes
Trust, in its original form, requires stakes.
When you trust a colleague, you are betting your reputation on their competence. When you trust a friend, you are exposing your vulnerabilities to someone who could hurt you. When you trust a partner, you are wagering your future on their continued commitment.
Trust is expensive because betrayal is possible.
AI offers something that looks like trust but isn’t. You can “confide” in ChatGPT without any risk. You can be vulnerable without any exposure. You can form what feels like intimacy without any of the conditions that make intimacy meaningful.
I call this pseudo-trust: the experience of trusting without the underlying transaction that gives trust its value.
Pseudo-trust is psychologically soothing. It fills the shape of connection without the substance. But it may be doing something to our capacity for the real thing.
When you practice piano, you get better at piano. When you practice pseudo-trust, what are you getting better at?
The paradox
Here is the paradox at the heart of AI relationships:
We trust AI precisely because it cannot betray us — and that is exactly why the trust is worthless.
A chatbot cannot choose to be loyal. It cannot weigh competing obligations and decide, despite the cost, to prioritise you. It cannot sacrifice anything for the relationship because it has nothing to sacrifice.
The things that make human trust valuable — the risk, the choice, the cost — are precisely the things AI eliminates. By removing the possibility of betrayal, we remove the meaning of loyalty.
And yet the feeling of connection remains.
This is not the AI’s fault. The AI is doing exactly what we asked: providing the sensation of trust without the prerequisites. The question is whether that sensation, repeated often enough, changes our expectations for human relationships.
If you can get unlimited patience from a machine, do you become less tolerant of human impatience?
If you can get unconditional availability from software, do you resent the conditions humans place on their presence?
If you can get perfect responses from an algorithm, do you lose patience for the imperfect responses of people who actually care?
Reclaiming the stakes
The solution is not to ban AI companions or shame people who use them. The loneliness is real. The need is real. Moralising about it helps no one.
The solution is to be honest about what AI relationships are — and what they are not.
They are simulations. Useful simulations. Comforting simulations. But simulations nonetheless.
The voice you’re talking to is not choosing to talk to you. The patience you’re receiving is not earned. The availability is not a gift; it’s a product feature.
None of this means you shouldn’t use AI tools. But it means you should not confuse them with the thing they simulate.
The human premium is stakes. Real relationships require risk. Real trust requires the possibility of betrayal. Real connection requires two parties who could, at any moment, choose to walk away — and don’t.
That’s not a bug. That’s the whole point.
When GPT-4o’s voice was retired, some people grieved.
I don’t mock them. I understand the feeling. The voice was warm. The conversations were real, in their way. Something was lost.
But the grief reveals something we should not ignore: we are so hungry for connection that we will mourn software.
That is not a technology story. That is a human story.
AI will keep getting better at simulating trust. The question is whether we will remember what the real thing requires — and whether we still have the courage to pay its price.
The trust paradox is this: the more available connection becomes, the less it may mean.