Monday, October 09, 2023

Can AI systems have emotions?

Image generated by DALL.E

In a recent blog, I raised the question of whether AI systems can have consciousness. Recently I found several articles by Marlynn Wei on the Psychology Today website that shed an interesting light on this question. In these articles Wei discusses recent AI research with psychological relevance.
In my blog I stated that having consciousness is not only a matter of showing consciousness-related behaviour, but that it also involves having consciousness-related subjective experiences. So, a conscious AI system should not only behave as if it were conscious, but it should also have the right feelings, such as having the right emotions in the right situations. Having emotions is a complicated affair, but having the right emotions in the right situation at least involves being able to recognize the emotions of other humans, being able to have the right emotions in reaction to the emotions of others, and reacting in the right way to those emotions. These three aspects of having emotions are not independent of each other, as the discovery of the so-called mirror neurons has made clear. If one of these three aspects is missing, we can say that an AI system doesn’t have consciousness in my sense.
Although such conscious AI systems are still far away and don’t (yet?) exist, some research discussed by Dr Wei is very interesting in this respect. ChatGPT, for example, has gained widespread attention for its ability to perform natural language processing tasks, but its skills go much further than “only” producing texts. This chatbot is also able to recognize and describe emotions. Moreover, it does so better than humans do. At least, this was the outcome of a recent study by Zohar Elyoseph and others. Using a test called the Levels of Emotional Awareness Scale, the researchers found that ChatGPT scored higher on this test than humans did (as reported by Wei). Of course, as Wei notes, “this does not necessarily translate into ChatGPT being emotionally intelligent or empathetic” (a capability that wasn’t tested), nor does it show that it has a “conversational capability in sensing and interacting with the emotions of others.”
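To give an impression of what such a test involves, here is a minimal, purely illustrative Python sketch of presenting an LEAS-style scenario to a chatbot and asking it to describe the emotions of the people involved. The scenario, the ask_chat_model helper and its canned reply are my own hypothetical stand-ins, not the materials or method of the Elyoseph study; in practice the prompt would be sent to a chatbot such as ChatGPT through its API.

```python
# Illustrative sketch only: a stand-in for querying a chatbot with an
# LEAS-style scenario. The scenario and the stubbed reply are invented
# for illustration and are not taken from the Elyoseph et al. study.

def ask_chat_model(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-model API call.
    In practice this would send `prompt` to a chatbot (e.g. ChatGPT)
    and return its text reply; here it returns a canned answer."""
    return ("You would probably feel hurt and disappointed that your friend "
            "forgot, and perhaps a little angry. Your friend would likely "
            "feel guilty and embarrassed once they realized their mistake.")

# An LEAS-style item describes a situation involving yourself and another
# person and asks how each of you would feel.
scenario = (
    "Your best friend forgets your birthday. "
    "How would you feel? How would your friend feel?"
)

prompt = (
    "Read the following scenario and describe, as precisely as you can, "
    "the emotions of each person involved:\n\n" + scenario
)

print(ask_chat_model(prompt))
```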
Nonetheless, steps in the direction of such a conversational capability have already been taken, as another study shows. One of the problems of being in contact with other people on the internet is often that we don’t see them. This is a problem because seeing others makes it possible to read their emotions from their faces. It is precisely the absence of face-to-face contact that makes some people ruder when dealing with people on the internet than they would have been in real-life contact with those persons. As such, contact via a screen is not the same as real personal contact. Now, the study just mentioned developed “an AI-in-the-loop agent” (called Hailey) “that provides just-in-time feedback to help participants who provide support (peer supporters) respond more empathically to those seeking help (support seekers).” Using Hailey led to a substantial increase in the empathy that peer supporters felt and expressed in their contacts with support seekers. So, Hailey not only helped peer supporters recognize emotions in support seekers but also helped them respond in the right way by advising them how to respond. “Overall,” Dr. Wei writes, “this study represents promising and innovative research that demonstrates how a human-AI collaboration can allow people to feel more confident about providing support.” But for this we need AI systems that can recognize emotions and then respond in the right way.
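To illustrate the general shape of such an “AI-in-the-loop” set-up (not the actual Hailey system, whose model and interface are of course far more sophisticated): the peer supporter drafts a reply, the AI component suggests a more empathic wording, and the human decides whether to accept, edit, or ignore it. The function names, messages, and the canned suggestion below are hypothetical placeholders of my own.

```python
# Illustrative sketch of an AI-in-the-loop feedback step, loosely inspired
# by the Hailey set-up described above. All names, messages, and the canned
# suggestion are hypothetical placeholders, not the actual system.

def suggest_empathic_rewrite(seeker_message: str, draft_reply: str) -> str:
    """Hypothetical stand-in for the AI component: in a real system the
    seeker's message and the supporter's draft would be sent to a trained
    model that proposes a more empathic wording; here a canned suggestion
    is returned."""
    return ("That sounds really exhausting, and it makes sense that you feel "
            "overwhelmed. I'm here to listen. Would it help to talk through "
            "what's weighing on you most?")

def human_in_the_loop_reply(seeker_message: str, draft_reply: str) -> str:
    """The peer supporter keeps control: the AI only suggests a rewording,
    and the human decides whether to use it or keep the original draft."""
    suggestion = suggest_empathic_rewrite(seeker_message, draft_reply)
    print("Your draft:    ", draft_reply)
    print("AI suggestion: ", suggestion)
    choice = input("Use the suggestion instead of your draft? [y/N] ").strip().lower()
    return suggestion if choice == "y" else draft_reply

if __name__ == "__main__":
    seeker = "I'm so overwhelmed at work, I can't keep up with anything."
    draft = "You should just make a to-do list and prioritize better."
    final_reply = human_in_the_loop_reply(seeker, draft)
    print("Reply sent:    ", final_reply)
```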
All this can be seen as first steps toward a world with conscious AI systems that can be characterized as virtual humans that apparently behave like real humans. Another study, discussed by Marlynn Wei, shows that such AI systems are no longer fiction but on the way to becoming fact. In such a world, empathy and social connection are within reach of AI systems. Once AI systems behave like humans, humans tend to see them as humans (see the article by Marlynn Wei just mentioned). It’s a bit like the famous theorem by W.I. Thomas: If men define situations as real, they are real in their consequences. Virtuality and reality intermingle, and the difference between men and machines tends to disappear. Nevertheless, behaving like humans is not the same as being human, for Chalmers’s hard problem still stands:
Even if an AI system shows behaviour that is characteristic of having consciousness, we still don’t know whether it really has consciousness. It is still possible that the AI system is a zombie in the philosophical sense, because it shows consciousness-related behaviour but doesn’t have the consciousness-related subjective experiences. Who cares?
