
Monday, September 04, 2023

Can AI have consciousness?


One of the intriguing questions in the debate about the implications of artificial intelligence (AI) is whether AI systems can have consciousness. Consciousness is a characteristic that is seen as typical of human beings, although some – if not many – animals have it to a certain extent as well. Self-consciousness is seen as the highest form of consciousness. Probably only human beings and a limited group of animals, such as chimpanzees and elephants, have it. It can be established only in an indirect way. For instance, if an animal recognizes itself in a mirror, it is assumed that it has self-consciousness. Once we know that a being has consciousness, it may be relatively easy to establish whether it also has self-consciousness (for example with the mirror test), but what does it mean that a being has consciousness tout court? This depends not only on the facts, namely on the way a being behaves, but also on how we define “consciousness”. On this point, scientists and philosophers disagree. Moreover, once we know how to define consciousness, the next problem is how to know that a being has it. In a sense, human beings and other beings are black boxes: we can study their behaviour, and perhaps the mechanisms that cause this behaviour, but not directly the feelings, qualia and other subjective experiences behind the behaviour – and it is precisely the capacity for subjective experience that is essential for having consciousness. Therefore, even if we can measure behaviour that is typical of having consciousness, we don’t know for sure whether the being we study really has consciousness. It is possible that the being concerned is what David Chalmers called a zombie: it shows consciousness-related behaviour, but it doesn’t have the consciousness-related subjective experience that, for instance, human beings have.
Therefore, such a zombie is behaviourally indistinguishable from a human being. The problem then is: how else can we distinguish it from a human?
Recently, a group of AI experts published a report in which they try to answer the question whether AI systems can have consciousness. I must admit that I have not read the report itself, but only about it. Nevertheless, I think I can write some reasonable words about it in this blog. In their report, the experts come to the conclusion that computers can have consciousness, although present AI systems are far from developed enough to be called conscious. Nevertheless, the experts think that sooner or later conscious AI systems will exist, and they also discuss several possibilities for how such systems might be structured. This is very interesting and intriguing, of course, especially because people (you and I included) tend to think of such conscious AI systems as a kind of humanlike beings, as apes are, for instance, or maybe even more human than apes already are. But even if we do not personify such AI systems and keep seeing them as machines, we still ascribe to them a typically human characteristic, namely consciousness. And we do so in view of the behaviour of the AI system plus its structure (the “machine”), so in view of what we know about its software and hardware. However, being conscious is not only a matter of showing a certain type of behaviour and of having a certain structure (mechanism); it is also a matter of having the related subjective experiences. And how do we know that AI systems have subjective experiences when they show the related behaviour? In this respect, David Chalmers distinguished two kinds of philosophical problems: the easy problem and the hard problem. The easy problem is to establish that a being or an AI system behaves like a conscious being. It is solved by measuring its behaviour, explaining the behaviour from its physical structure, and then concluding whether or not this is how a conscious being would behave.
However, how do we know whether a being or AI system really has subjective experience? This is what Chalmers calls the hard problem, and as yet no one has shown how to solve it. So even if an AI system behaves like a conscious being, and even if its physical structure might make consciousness possible, it may still very well be a zombie.

Some interesting links:
- https://www.volkskrant.nl/nieuws-achtergrond/ai-wetenschappers-en-filosofen-computers-kunnen-bewustzijn-hebben~b2a213b7/
- https://www.nature.com/articles/d41586-023-02684-5

5 comments:

Paul D. Van Pelt said...

Intriguing question. My only thought on this, at present, is mostly equivocation. I think science may concoct something like consciousness in AI constructs. Call it quasi-consciousness, just to have a working term to wrap around. I suspect someone else is already there, or on the way. Would/will this embody the intricacies and enigmas of the human variety? Unlikely, I think. Would it make AI more useful? Possibly.

Paul D. Van Pelt said...

It may well be that the Zombie aspect is paramount. I finished reading something today that had been inaccessible due to an internet outage. The Chinese-Canadian researcher interviewed gave a voluminous account of her work, at the intersection of art and science. All well-articulated in the language of current theory and/or supposition.
Profoundly academic. It impressed a professional friend with whom I shared it. I wanted to ask a question: so, what does all this obtain? But there was no portal for comment. Therefore, no interest in questions.

HbdW said...

Art is such a thing. If an AI system could have consciousness, could it then create art as well? If it cannot, can we then say that it is conscious? But what is art? What is creativity?
When did humans become creative? The oldest pieces of art are, say, 50,000 years old, but weren't human beings creative before that date? Maybe their art has been lost, for example because it consisted of body paint, wood carvings, etc.

Paul D. Van Pelt said...

So many questions, by the way. If I had answers, I would be an expert, teaching others. I don't think AI can be conscious in a human way. There are probably a hundred experts who would agree, and at least that many more who would differ. Can AI create art? AI can likely create something: a computer program can process words. Some may say: well, that is art, isn't it? So, AI can write a novel. Yeah, sorta. I am a skeptic, by my pragmatic nature. And, yes, centuries of human art have been lost. I doubt there are many who would disagree with that statement, but there are surely some. Little has yet been shown to be unequivocal.

HbdW said...

Like you, I think that AI will not become conscious in the human sense, if only because the subjective experience will be lacking. I also have my doubts about whether AI can become really creative.