Ctrl + Alt + Human?
Exploring how the everything-now era challenges human thought in the age of AI.
Progress for humanity has never been a more central notion than it is now, and it is safe to say it will remain our collective objective in the decades to come.
One can always argue that the very definition of progress is controversial – and the current geopolitical context amplifies the diversity of opinions on the matter. For most, though, it points to the idea of moving forward and making improvements.
As briefly discussed in a previous article, an ever-increasing reliance on artificial intelligence, combined with ever more sophisticated communication platforms, may not lead to progress, at least in the short term. Economic and strategic interests of enormous magnitude are acting as uncontrollable forces, reinforced and even justified by the geopolitical context mentioned above.
A highly contested AI race has begun on the world stage, and it does not seem to care much whether it truly benefits humanity. While a competition among brilliant minds sounds exciting, it risks going astray without proper safeguards. This is where the everything-now era we live in – or are forced to live in – can be pernicious. In other words, if we wish to avoid bringing about our own downfall, we need to mediate the immediate.
From a technological standpoint, it is doubtful whether this is even possible. Solutions exist to quickly take down harmful AI-powered content, but most remain reactive. In an age where virality has become a success metric, where there is always a pair of eyes – or a bot – ready to repost anything, reactivity does not stand a chance. One may argue that content could be vetted before it sees the light of day; however, such mechanisms would clash with the notion of freedom of speech that some media leaders have established as an untouchable tenet.
Current circumstances point, once again, to a bleak outlook and a world in which technology governs our collective thinking.
Let me rephrase: in the everything-now era, we may well move towards a world in which our thinking is both binarised – reduced to simplistic, opposing viewpoints – and banalised, stripped of depth and originality by technology.
Algorithms increasingly serve us extreme views, crowding out nuanced thought, while automation encourages passive consumption over critical engagement. After all, if thinking for ourselves is so challenging, why force ourselves to do it when a machine can do it for us?
Putting human intelligence back in front strikes me as a clear call to action. As noble as it sounds, this aspiration can only materialise through a deliberate and conscious effort by all the main stakeholders, from political leaders to educational thinkers. Policymakers must prioritise ethical AI development, even when it challenges the financial interests of certain corporations. Similarly, the education community has no choice but to profoundly rework existing curricula, making room for the critical-thinking skills needed to counterbalance technological influence.
The magnitude of the technological revolution unfolding before our eyes should not be underestimated, but it also presents a clear opportunity to bring our uniqueness as human beings back to the forefront.