The so-called “trolley problem” is more relevant than ever now that the coronavirus rules the world. Recently, in my blog post of 23 March 2020, I discussed its relation to the corona crisis. Since then, I have repeatedly seen discussions on TV that prove the topicality of the problem.
To recapitulate,
there are two versions of the trolley problem. In version 1, a runaway trolley is headed for five people who will be killed if it proceeds on its present course. However, if you turn a switch, the trolley will be diverted to another track, where it will kill a man who is walking there. Will you turn the switch, saving five lives at the cost of one? Most people say yes. In version 2, you are standing on a footbridge and a fat man is standing next to you. Now you can stop the trolley by pushing the fat man off the bridge. His body will stop the trolley, but the man will be killed. Will you push the man in order to save five lives? Most people say no. Generally, the options here are seen as a dilemma: either you let utilitarian arguments prevail or you let deontological arguments prevail. Utilitarians reason that one should promote the “greater good”. Since saving five lives is better than saving one, you must push the fat man. Deontologists argue that certain moral lines ought not to be crossed. They argue from principles: if your principle is “You shall not kill”, you are not allowed to kill the fat man.
These basic approaches are usually seen as alternatives, but recently a Russian philosopher friend drew my attention to an article that throws new light on the question. The philosophers and neuroscientists Joshua D. Greene et al. didn’t just want to argue about which approach is best in trolley-like cases; they wanted to see what happens in the brain when people make decisions in such cases (see Source below). I’ll skip the details, but
the essence of what they did and found is this. First, they distinguished between
personal and impersonal moral judgments. Personal moral judgments “are driven largely by social-emotional responses while other moral
judgments, which we call ‘impersonal,’ are driven less by social-emotional
responses and more by ‘cognitive’ processes.” Personal moral judgments concern
the appropriateness of personal moral violations, like personally hurting
another person. They require agency, doing something yourself. Impersonal moral
judgments are then those that are not personal. They are not so much a matter of doing something actively; rather, they are a matter of interfering, directing or following (my words). Greene et al. put it this way: it is “editing” rather than “authoring”, not agency. According to the authors, an example of a personal moral dilemma is the “footbridge” version of the trolley problem, and an example of an impersonal moral dilemma is the “turning the switch” version. “Footbridge” arouses much emotion when one decides what to do, while “turning the switch” is a matter of calculation. According to the authors there is reason to believe that the personal-impersonal distinction has an evolutionary basis: impersonal approaches to moral dilemmas appeared later in human development than personal approaches.
Next, the authors developed a test in order to see what happens in the brain when moral decisions are made. What did they find? When impersonal moral judgments are made, the “cognitive” parts of the brain are involved, while in the case of personal moral judgments the parts of the brain where social-emotional responses take place are involved. Moreover, the authors found that in relevant cases impersonal judgments tend to prevail over personal judgments.
What does this mean for moral philosophy? I think I can best quote extensively from the “Broader Implications” section of the article: “For two centuries,
Western moral philosophy has been defined largely by a tension between two
opposing viewpoints[: Utilitarianism (Bentham, Mill) and deontology (Kant)]. Moral
dilemmas of the sort employed here boil this philosophical tension down to its
essentials and may help us understand its persistence. We [=the authors] propose
that the tension between the utilitarian and deontological perspectives in
moral philosophy reflects a more fundamental tension arising from the structure
of the human brain. The social-emotional responses that we've inherited from
our primate ancestors …, shaped and refined by culture-bound experience,
undergird the absolute prohibitions that are central to deontology. In
contrast, the ‘moral calculus’ that defines utilitarianism is made possible by
more recently evolved structures in the frontal lobes that support abstract
thinking and high-level cognitive control. … We emphasize that this cognitive
account of the Kant versus Mill problem in ethics is speculative. Should this
account prove correct, however, it will have the ironic implication that the
Kantian, ‘rationalist’ approach to moral philosophy is, psychologically
speaking, grounded not in principles of pure practical reason, but in a set of
emotional responses that are subsequently rationalized .... Whether this
psychological thesis has any normative implications is a complicated matter
that we leave for treatment elsewhere ....”
If all this is true, I think it is just as important that making moral judgments is not simply a matter of either-or, in the sense that one follows either utilitarian rules or deontological principles. Even if one turns the switch, one can rightly have the feeling that one is breaking the rule “you shall not kill”. And even if one doesn’t push the fat man off the bridge, one can still wonder whether it wouldn’t have been better to save the lives of the five people on the track. Making decisions and moral judgments is not simply a matter of choosing a guiding approach and that’s it. Apparently, utilitarianism and deontology are not alternatives but options.
Source
Greene, Joshua D.; Leigh E. Nystrom; Andrew D. Engell; John M. Darley; Jonathan D. Cohen, “The Neural Bases of Cognitive Conflict and Control in Moral Judgment”, in Neuron, 44/2 (October 14, 2004); also at https://www.cell.com/neuron/fulltext/S0896-6273(04)00634-8
1 comment:
I guess you replaced the switch with the fat man to make the act of killing a person clearer, because you actually need to push someone off a bridge. Well, I think whether or not you turn the switch depends on your personality and moral guidelines, or on whether you choose the utilitarian or the deontological perspective. Me personally, I would turn the switch. If by taking action I can reduce harm, I will do it. Just standing around while it happens is exactly what people in Nazi Germany did. They were too scared, and then it was too late to take action. If we let everyone but ourselves decide, we could just as well kill ourselves: we wouldn’t make a difference, and a passive life means to vegetate, not to truly live.