ANTHROPOLOGY

The next challenge posed by AI is our return to critical thinking

Millions of people are exposed to an 'epistemic risk': the inability to distinguish between what is true and what merely appears to be true. The habit of asking our 'virtual friends' about everything and anything makes the ability to distinguish genuine knowledge from statistical prediction more necessary than ever.

Life and Bioethics, 7 January 2026

The year 2026 brings with it many expectations and hopes, but also questions. Unfortunately, various conflicts continue to plague the globe, and the solutions looming on the horizon remain ill-defined. All this is taking place against a backdrop of significant challenges to the very anthropological structure of the human being. We could touch on several related topics here, but we have chosen to focus on one phenomenon in particular: Artificial Intelligence (AI).

On 8 December, the renowned American magazine Time dedicated its cover to 'The Architects of AI'. Among them are Elon Musk, Mark Zuckerberg, Jensen Huang (co-founder and CEO of Nvidia, an AI giant), and Sam Altman (CEO of OpenAI, the company behind ChatGPT). These individuals are just some of the protagonists of the epochal transition our age is experiencing, one that reached its zenith in the past year.

While 2025 did not see the birth of AI, it did normalise it. It became commonplace for millions of people around the world to turn to this 'imaginary friend' with all sorts of questions. This is where the quantum leap took place. ChatGPT counted 300 million weekly users in December 2024; by November 2025, the figure had risen to 810 million. The film, art and innovation magazine ONOFF MAG reviewed the hundred strangest questions asked of ChatGPT, covering everything from relationship problems and barroom philosophy to existential 'paranoia' and everyday dilemmas. Depth and mystery, but also trivialities and absurdities, lie at the heart of the average user's requests. At this point, we should ask what effects this 'different' type of socialisation has on human beings. First, however, we should ask whether we can speak of a genuine socialisation process at all when the interlocutor is a 'supercomputer' that imitates the human intellect.

The problems would not exist if those who use AI daily took its answers with a pinch of salt. It is well known that the output often contains significant errors, and it is not yet clear how long this will remain the case. The dissonance lies precisely at the intersection between the apparent credibility of AI and the waning ability of 21st-century humans to reason critically.

Ultimately, we are dealing with the consequences of what Martin Heidegger called 'calculative thinking' and what Max Weber defined as instrumentally rational, goal-oriented action. This form of rationality, which never asks 'why' and avoids 'meaningful discourse', ends up atrophying the capacity for judgement.

It is in this existential corner, or cognitive black hole, that we encounter what Walter Quattrociocchi, director of the Centre for Data Science and Complexity for Society at La Sapienza University in Rome, calls 'epistemia': the 'inability to distinguish between what sounds like knowledge and what is knowledge'. Human verification capabilities are thus challenged on two fronts: by the lack of any habit of critical thinking, and by the astonishing linguistic flawlessness of AI's Large Language Models (LLMs).

'Content may seem true to us, not because it is true, but because its linguistic form reminds us of someone who usually tells the truth. It is a cultural reflex, not a critical act,' observes Quattrociocchi. The pitfalls multiply when we consider that this linguistic flawlessness is compounded by 'sycophancy', i.e. the tendency of models to confirm the interlocutor's beliefs.

In essence, AI takes on the role of Azzeccagarbugli, the lawyer in Manzoni's The Betrothed who is always ready to distort reality to please whoever stands before him and reassure them of the legality of their actions.

Unfortunately, this process does not serve to 'know', but rather to 'predict'. 'Knowledge becomes a personalised service, tailored to our point of view. Doubt disappears. Dissent does not arise. Every interaction reinforces the illusion that the world is exactly as we imagine it to be. And that this is knowledge,' recalls Quattrociocchi.

However, it should be clear that our criticism is aimed not at AI but at human behaviour. The real subject of these reflections is human rationality, and the hope is that it will regain the ability to critically evaluate what it observes, hears and reads. This will not be an easy task, of course, so the first step is to unmask easy certainties, specious guarantees and rash leaps forward. The present is shaped not only by imagining the future, but by creating the conditions to 'secure' it here and now.