Me, Myself & AI - Part II: Human-Closing-The-Loop
Exploring how AI’s drive toward clarity may strip away the ambiguity where human trust, nuance, and judgment thrive — and why preserving these spaces matters more than ever.
What happens when AI removes ambiguity — the very thing that makes business and society human?
In part I, I explored how AI shifts decision-making — and relevance — away from us. But this is bigger than tech. It's about what we might lose.
Any business school curriculum worth its salt would tell you: business opportunities thrive in the presence of market ambiguity or information asymmetry. What if Artificial Intelligence eliminates both conditions? Is it unreasonable to think that:
Competitors using the same AI could end up with the same answers?
AI could become the greatest business equaliser ever?
It is only a futuristic scenario, but it underscores the need for, and the value of, stronger human connections and judgment.
Imagine this. You are a business leader facing an important investment decision. The AI tool returns identical assessments for every seller. With no compelling rationale to anchor your choice, you choose the one you trust more. The person you trust more. That is the human premium.
Picture this. You are the head of a family seeking critical financing, and the decision rests on an AI credit-scoring system that has factored in structured and unstructured data, designed to eliminate ambiguity and maximise success for the financial institution.
What if your unique context can’t be turned into data? AI might miss the nuance of a family emergency, an informal agreement, or an act of good faith. Cold, hard data won’t capture the subtleties behind your loan application.
Human interaction captures nuance. A lender could not only review the AI’s output but challenge and enrich it — with human judgment, not just data.
You may be familiar with the “human-in-the-loop” model of collaboration between AI and humans. It positions humans as reviewers of AI output. It gives us presence. We should aim instead for “human-closing-the-loop”: a coexistence model in which we don’t just review decisions. We own them.
Both examples point to the same unlock: trust in human judgment, with responsible AI governance as its corollary. Most impactful technologies end up over-trusted (think facial recognition or search engines). AI may be next.
The change is too big to speed through. If we engage only superficially, we don’t just risk errors; we risk eroding the fabric of society.
Would you trust AI to make your next major life decision? Drop a comment: when does AI stop helping, and start replacing, what matters most?