TECHNOLOGY

Warning: artificial intelligence will guide our intentions

What if Artificial Intelligence understood our intentions and anticipated our choices? This is not science fiction, but a development already underway. And we run a real risk of it replacing us in our ethical choices as well.

Life and Bioethics, 13 January 2025

Dear reader, did you know that artificial intelligence (AI) already knew, before you decided, that you would be reading the Compass today, and probably this very article? This is the gist of an evocative paper by Cambridge researchers Yaqub Chaudhary and Jonnie Penn, entitled Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models, published on Dec. 30.

The two scholars argue that we are moving from the attention economy to the intention economy. As for the former, it is well known that websites, social networks, chats and so on record what we look at, watch and buy, and pass this mass of data on to companies so that, armed with their knowledge of our tastes, they can steer our purchases with advertisements, suggested items and the like. Now a further step is underway: AI will predict our intentions. It no longer merely observes what we observe; it interacts with us in order to know us better and anticipate our moves. And how does AI interact with us? Through personal or digital assistants (think of Google Assistant, Alexa or Siri) and through chatbots, software programmed to converse with us humans. Both systems record an immense amount of information about us: choices, preferences and habits concerning lifestyle, consumption, interests and emotional states; where we are, whom we meet, what we read, and so on. They record it very accurately and over long periods, because we talk to them and interact with them constantly and for all sorts of purposes. In short, these personal assistants and chatbots know us better than Facebook does.

And here is the point: all this knowledge about us will be used by AI to predict our choices and suggest them to us before we make them: from what we want to what we would like to want. The article gives an example of a voice assistant interacting with the user: “You said you're feeling overworked, should I book you that movie ticket we talked about?” And why stop at the movie theater? Other possible dialogues, invented by us, might run: “You said you are fed up with your wife. Have you ever thought of a new life without her? You're still young”; “You're pregnant, it's your second child, and you and your partner still have to finish paying the mortgage. Have you ever considered abortion? If you want, I can read you some articles on the subject.”

Of course, the suggestion will come not so much from the AI itself as from the companies, big media or political power groups that have sold or given us the digital assistants installed in our smartphones and in our homes. If data about us was once worth gold, what is valuable now is our intentions. “These companies,” the two researchers add, “are already selling our attention. To gain a commercial advantage, the next logical step is to use technology, which they are evidently already developing, to predict our intentions and sell our desires before we even fully understand what they are.”

It goes without saying that, as the above examples suggest, the step from “suggestion” to “manipulation” is a very short one. Researchers at the Leverhulme Centre for the Future of Intelligence (LCFI) in Cambridge speak of “persuasive technologies,” to put it mildly. The AI present in these technologies will create relationships of trust and understanding with us, and so we will be persuaded to follow its suggestions. In short: the AI will make decisions for us, even if we do not realize it. From information, to suggestion, to shaping our consciousness and the collective consciousness.

The two scholars in this regard are very clear: “such tools are already being developed to elicit, infer, collect, record, understand, predict, and ultimately manipulate, modulate, and commodify human plans and purposes, whether mundane (e.g., choosing a hotel) or profound (e.g., choosing a political candidate).”

All this is not future, but present. The developers of App Intents, Apple's framework for connecting apps to Siri (Apple's voice-controlled personal assistant), have included protocols to “predict actions someone might take in the future [and] suggest the intention formulated by the app.”
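To give a concrete idea of what this looks like in practice, here is a minimal sketch in Swift. The intent, its name and its booking logic are our own invention for illustration only; what is real is the pattern the App Intents framework provides, in which an app describes an action so that the system can observe its use and later suggest it proactively.

```swift
import AppIntents

// Hypothetical intent: names and logic are invented for illustration.
// Declaring an action this way lets the system learn when it is used
// and later *suggest* it to the user, e.g. via Siri or Spotlight.
struct BookMovieTicketIntent: AppIntent {
    static var title: LocalizedStringResource = "Book a movie ticket"
    static var description = IntentDescription(
        "Books a ticket for a film the user has shown interest in."
    )

    @Parameter(title: "Film title")
    var filmTitle: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Placeholder: a real app would call its booking service here.
        return .result(dialog: "Booked a ticket for \(filmTitle).")
    }
}
```

The point is not the few lines of code, but who triggers them: once the action is declared, it is the system, not the user, that decides when to put it forward.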

The ramifications of this shift from the predictive to the prescriptive are endless, even in the bioethical field. In January of last year, the scientific journal The American Journal of Bioethics published the following article: A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable.

What is to be done when a patient is incapacitated? Yes, there are advance declarations of treatment (DATs). But what if they are missing? And even when they exist, what if they are obscure, ambiguous or deficient? Yes, there is the figure of the healthcare proxy. But what if he too is missing, or, even if present, who is to say he is reliable in reporting the patient's wishes? The same goes for relatives. Here AI comes to the rescue, in this case under the name of Personalized Patient Preference Predictor: the 4P model.

The authors of the article just cited propose “to use machine learning to extract patients' values or preferences from data obtained at the individual level and produced primarily by themselves, where their preferences are likely to be encoded (even if only implicitly).” To simplify with an example: you are in an accident and end up in a coma. The doctors ask Alexa what choice you would have made at that juncture. First, Alexa gathers all the readings and videos on the topic of euthanasia that you may have liked, along with the conversations you have had with her, or with others, on the same topic. Second, it weighs this data package against your rather melancholy temperament and your not-always-sunny attitude toward life, so interpreted on the basis of the films, readings and interests you have cultivated, the emails and posts you have written, the pictures of sunsets posted on Instagram, the purchases of gothic-creepy clothing on Amazon, and a few unhappy phrases worthy of Leopardi that you hurled at Heaven in a passing fit of despondency. And so, finally, in a billionth of a second you find yourself in a coffin because Alexa decided so. Or rather: whoever programmed Alexa. And it matters little that at that juncture you might have decided differently from your previous decisions, since “hypothetical situations do not necessarily reflect what people choose in real situations.”
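To make the worry tangible, here is a deliberately naive sketch, entirely our own invention and nothing like the authors' actual machine-learning proposal: a handful of keyword matches in someone's old posts is converted into a “presumed preference” about life-sustaining treatment. The cue phrases and scoring are hypothetical; the sketch only shows how thin the evidential base for such a verdict can be.

```swift
import Foundation

// A toy "preference predictor": count value-laden phrases in a person's
// texts and turn the tally into a guess about end-of-life wishes.
struct PreferencePredictor {
    let refusalCues = ["pull the plug", "right to die", "no machines"]
    let continueCues = ["fight on", "never give up", "every day counts"]

    // Returns a score in [-1, 1]; negative means "presumed refusal of care".
    func score(texts: [String]) -> Double {
        var pro = 0
        var con = 0
        for raw in texts {
            let text = raw.lowercased()
            pro += continueCues.filter { text.contains($0) }.count
            con += refusalCues.filter { text.contains($0) }.count
        }
        let total = pro + con
        guard total > 0 else { return 0 }  // no evidence at all
        return Double(pro - con) / Double(total)
    }
}

// One melancholy post is enough to tip the scale.
let predictor = PreferencePredictor()
let verdict = predictor.score(texts: [
    "Another grey sunset. Some days I'd rather they just pull the plug."
])
print(verdict < 0 ? "Presumed: withdraw treatment" : "Presumed: continue treatment")
```

A real system would be statistically far more sophisticated, but the structural problem is the same: a figure of speech uttered in despondency becomes input to an irreversible decision.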

Reading the two articles together, we understand that an anthropological involution is under way: the virtual at first informed us, then helped us, and in the near future will replace us. From information, to help, to replacement. Indeed, the researchers who proposed the 4P model claim that the AI would become “a kind of ‘digital psychological twin’ of the person.” Our freedom, already heavily manipulated today in so many ways, would be handed over to those who operate the AI, and the AI would choose for us whether to go to the movies, whom to marry, and whether to pull the plug. We would hand the AI a full delegation because, in the collective perception, it is super-intelligent, neutral in its judgements, and objective, being free of emotional conditioning and self-interest. The outcome would be fatal: it would no longer be we who live, but a virtual self of ours.