As we have seen, there are two entities within me that may determine my actions: my zombie and I. This raises the question: Who is responsible for these actions? Am I, my brain interpreter, responsible for what I do, or is it my zombie, who steers my actions? Here I do not want to discuss situations in which we do things genuinely unconsciously, for example when sleepwalking or in a blackout; I mean the cases in which I am completely aware of what I am doing.
We have to consider several possibilities here. The first one is the case for which my chauffeur example is a good analogy: it is I – my brain interpreter – who determines the main lines of what I am doing, and it is my zombie who executes them. This sounds quite Cartesian (like all the cases here), but I think that further insight will show that I and my zombie are actually deeply integrated. In any case, I think this case is a variant of the assumption that man has a free will and as such is responsible for what s/he is doing.
A second possibility seems more interesting: that it is not I – my brain interpreter – who determines the main lines of my actions, but that my zombie does. My I is really a brain interpreter, and it is my zombie who takes the decisions, which are then explained by my brain interpreter, who puts them into words. But does this mean that I (now in the sense of me as a person) am not responsible for what I am doing? It can still be so that my actions, and the decisions that ground them, are my actions and decisions, albeit actions based on rational but unconscious decisions. My brain interpreter is then like the government spokesperson who tells the press afterwards what the government has decided, informed of the decisions by the prime minister because she herself wasn’t present at the cabinet meeting. The cabinet decisions as such are still rational decisions, although the spokesperson learned about them only afterwards. So it can be with my zombie, too: my zombie takes his decisions hidden from the brain interpreter, but they are based on relevant facts, experiences, feelings and whatever else is needed for a rational weighing. Basically, this situation is not really different from the first one described here: I am still a person with a free will.
But what if my zombie is merely a kind of automaton? A kind of machine or computer that processes input and output according to certain rules programmed by nature and past experiences, without any kind of deliberate rational weighing (whatever that might mean)? And what if I, in the sense of my brain interpreter, do no more than give a kind of description of the output? And I as a person am no more than a sort of executor of these decisions when acting? Then I am really a zombie, albeit a zombie with the appearance of a conscious being.
That I am in fact no more than a zombie in disguise is quite possible, of course. But does this make any difference to the free-will situation? Should we then say: well, because I am actually a zombie who behaves automatically, I cannot help what I do, and the same holds for my fellow men? We are all automatons and we do not know what we do? Should we then conclude that we have to strike the idea of responsibility for what we are doing from our vocabulary? Or are we still responsible in some sense? Maybe it is irrational, but I guess that nobody would accept our not being responsible in some sense.