
Monday, June 14, 2021

Philosophical intuitions

In my blog two weeks ago I wrote about an investigation by the Lithuanian philosopher Vilius Dranseika, who tested a philosophical intuition about memory. Philosophical intuitions are often the subject of investigation in experimental philosophy, but what actually are philosophical intuitions? To keep it short, I want to define them here as immediately justified beliefs. However, philosophical intuitions are not “just” beliefs in the sense that a philosopher who has a certain intuition thinks: It’s what I think, but maybe I am wrong and maybe matters are different. No, a philosopher who has a philosophical intuition thinks that it is true and that every reasonable person will agree that it is true. Seen this way, we can define a philosophical intuition more precisely as an immediately justified true belief. Moreover, intuitions are not only true but, as said, immediately true. Of course, not everybody will immediately consider an intuition proposed by a philosopher true. Then the philosopher will not say: “Maybe I am wrong and maybe the intuition is not as intuitively true as I thought.” No, s/he’ll find reasons to explain why the intuition nevertheless is true, for philosophers are good at confabulating reasons; it is their job. And in the end s/he can always “play the man” and say: “Strange that you don’t see it. Everybody sees it that way.” Actually, this is the last resort in case you cannot convince your opponent, for it’s typical of an intuition that you cannot give it a factual foundation. Intuitions are simply true. Are they?
Before I discuss this question, I want to distinguish philosophical intuitions from psychological intuitions. Rather than being a form of true knowledge, psychological intuitions are a kind of “gut feeling”, and basically they are open to refutation. “Intuitively, I think that this man is a scoundrel” (but maybe he is the most honest man in the world). “Intuitively, I think we should go to the left” (but maybe it was the road to the right that led to our destination). Etc.
Now I come to the question whether philosophical intuitions are true. As a first step to undermine the idea that they evidently are, I want to discuss an example from psychology: the well-known Müller-Lyer illusion. Please click here for a picture of the illusion. Most people believe that the line on top is shorter than the line below it. Nonetheless, both lines have the same length. (Measure them if you don’t believe it!) Your intuitive belief is contrary to the fact. However, the Müller-Lyer figure is an illusion, so an observational error; it’s not a (false) philosophical intuition. But if such an apparently true observation about the lines in the Müller-Lyer illusion can be false, why not then the same for apparently true intuitions? And that’s what I want to maintain here: Most philosophical intuitions are false, or not as true as philosophers think. By the latter I mean that their truth is limited to certain contexts, like the context of investigation, culture, and the like (and even then I have my doubts).
There are so many intuitions in philosophy that it’s simply impossible to discuss them all, certainly in a blog like this one. It’s even impossible to discuss a representative fraction of the existing philosophical intuitions. However, they are especially used in thought experiments, and therefore, by way of illustration of my critique, I want to discuss a much-used thought experiment in the discussion about personal identity in analytical philosophy: brain swapping. Such thought experiments have the form that the brain of person A is transplanted into the body of person B. Variations of this standard case are that the brains of A and B are switched; that the halves of A’s brain are transplanted into different bodies; that only the information in A’s brain is brought to B’s brain (after the information in B’s brain has first been removed); or even that persons are copied and “teletransported” to another place (see here). Of course, everybody is free to invent what s/he likes, but can this thought experiment be the basis of a serious philosophical discussion? For the idea of brain swapping in one form or another is based on the implicit assumption that brain swapping is possible and that with swapping brains we swap personalities. If it weren’t so, it would make no sense to draw conclusions from such thought experiments about the characteristics of our personal identity. Nonsense will lead only to other nonsense. And that’s what is the case here. I don’t mean that the ideas about personal identity are nonsense, but if they are correct, it is not because of these thought experiments. Take for example the idea that we swap personalities if we swap brains. It is founded on the intuitive idea that we are our brains. But if I formulate it this way, many philosophers will say: Of course we are not our brains; we are more.
I guess that even some of those philosophers who use brain-swap thought experiments in their discussions on personal identity will deny that we are our brains. Why then do they use such brain-swap arguments to substantiate their views? I’ll give one example why your identity is not just in your brain. Simply said, some runners have fast-twitch muscles and others have slow-twitch muscles. Fast-twitch muscles will never make you a good long-distance runner, while slow-twitch muscles will never make you a good sprinter. Isn’t it so, then, that the type of muscles a runner has is a part of his or her personal identity?
The upshot is: Philosophical intuitions are just opinions. Another question is, of course, whether we can do without them.

Recommended literature
Elijah Chudnoff, Intuition. Oxford: Oxford University Press, 2013.

Thursday, June 10, 2021

Random quote
You can deceive some people always. You can deceive all people for some time. However, you cannot deceive all people always.

W.F. Wertheim (1907-1998)

Monday, June 07, 2021

Setting goals, also when you are old. In memory of Robert Marchand.

Recently I heard a cabaret performer, when asked what he thought of old age (he is already in his eighties), say: It’s fine, but what I miss most is that I cannot make plans anymore. Saying this, he no doubt meant (he didn’t explicitly say so) that when you are old you’ll die in the near future, and you don’t know when. You no longer have a long-term perspective, for there is a good chance that you cannot finish what you have started. The future has become short.
What the cabaret performer said is true, but in a sense it isn’t. A few years ago I wrote a blog about setting targets when you are old (see here). It was a blog about the cyclist Robert Marchand. Marchand, then 105 years old, had set a world record in one-hour track cycling in the over-105 age group, a category especially created for him. Of course, before he did, anybody of that age could have set such a record, for you just had to ride your bike for one hour and you had it. Not so Marchand. He didn’t want simply to set a record; he wanted to set the best record he could, if possible one that was faster than the time he rode when he set the world record in one-hour track cycling in the over-100 age group. As was to be expected, Marchand didn’t break his 100+ record, but he cycled an unbelievable distance in that hour on the National Velodrome at Saint-Quentin-en-Yvelines near Paris: 22.547 km. Who will ever go faster? But Marchand was a bit disappointed that he didn’t break his old record.
Marchand’s record was not only a remarkable time set by a remarkable man, but it also refuted the assertion by the cabaret performer I started this blog with: that there is no future, no perspective, no planning possible when you are old. Every sportsman knows that if you want to achieve a goal and you take it as seriously as Robert Marchand did, you must determine exactly what you want to achieve and you must make a plan. You must plan how to train in order to reach a top performance and you must set a date for the attempt. In other words, you must create a perspective for yourself, and that’s what Robert Marchand did.
Now I am the first to say that not everybody is still able to set goals when old. Health differences become enormous above, say, the age of 70. I don’t need to mention here all the diseases, illnesses and infirmities of old age. Many of them simply make setting goals and having a perspective for the future impossible or not sensible. Then there is no other way than to make the best of it and endure the suffering. On the other hand, many old people are still in good health. Then, as the case of Marchand shows, you can still set goals. Obviously, you are no longer as fit as when you were young. Especially, you decline physically. Many people say: Mentally I still feel as if I am thirty, but my body works against me. If you asked me, I would say the same, though actually I doubt whether in my mind I am still that young; but anyway it feels so…
As for the physical decline, there is more you can do about it than many people think. Particularly when you are old, inactivity will be the end. “Use it or lose it” is an adage that was already known in antiquity. So even at an old age physical exercise is important, or maybe even more important, something that Marchand also knew. So, when not so long ago he stopped outdoor cycling for medical reasons, he continued training on his indoor bike trainer and doing exercises. What many old people don’t know is that by physical training you can not only slow down the physical decay, but you can even become better! Of course not when you are already top fit, but certainly when you are at a lower level of your capabilities, there is room for improvement. Moreover, do not look only at the physical side of yourself. Using and exercising your mental capacities, your brain, is as important as staying physically in shape. Use your brain and be open to what is new. A healthy mind in a healthy body. However, the other way round is just as important: a body cannot stay healthy without a healthy mind. Human existence is a whole. Mind and body interact and at the same time they are one. Nonetheless, and in that sense what the cabaret performer said is true, gradually your perspective shrinks; naturally. Your goals come – not necessarily must come – nearer in time; naturally. Nobody can surpass human existence. On the 22nd of May, 2021, Robert Marchand died, 109 years old.

Recommended literature
Martha C. Nussbaum; Saul Levmore, Aging Thoughtfully. Conversations about Retirement, Romance, Wrinkles, and Regret. Oxford: Oxford University Press, 2017.

Friday, June 04, 2021

Random quote
Creative people … are those who try to prevent that the whole becomes a harmful routine.

Peter Sloterdijk (1947-)

Monday, May 31, 2021

Facts and what we remember


A rather new branch of philosophy is experimental philosophy. It gathers experimental data in order to test fundamental philosophical questions and suppositions, usually by interviewing non-philosophers in an experimental setting. Actually, “experimental philosophy” is a contradiction in terms, for traditionally philosophy is seen as a kind of a priori reasoning – “armchair philosophy” – but experimental philosophy is just a reaction to the idea that truths can be found only by arguing from intuitions. One problem with this idea is that what is intuitively true for some need not be so for others. Moreover, philosophers can disagree on the right argumentation leading to a certain conclusion. Etc. Then experimental testing can help to find out what is right and what isn’t.
Although experimental philosophy has forerunners, in the sense that late mediaeval / early modern natural philosophy can be seen as such, present-day experimental philosophy began around 2000, especially within analytical philosophy. Leading experimental philosophers are Joshua Knobe and Shaun Nichols. In several blogs I have already discussed some results from experimental philosophy. In this blog I want to discuss results from an article by the Lithuanian philosopher Vilius Dranseika about a philosophical intuition on memory (see Source).
A common claim among philosophers – call it Claim, although not all philosophers endorse it – is “that one can be truly said to ‘remember’ some event only if that person originally experienced or observed that event.” (p. 175) It is true that people can be blamed for saying that they remember being present at an event or taking part in it, if it is clear that they weren’t there. Nevertheless, as an experimental study by Dranseika showed, Claim is too strict. For investigating its truth, he distinguished three kinds of memories: true memories (T), quasi-memories (Q) and artificial memories (A). To keep things short: Q is, for example, a dream of something you never experienced, but after the dream you think you did; A is, for example, a chip with someone else’s memories put into your brain. In order to check whether non-philosophers consider cases of quasi-remembering and artificial memories to be cases of remembering, Dranseika presented vignettes with variations for T, Q and A to test persons. By way of illustration, here is such a vignette:
Imagine it is 2086. Scientists have invented a technology that allows one [to install human memories (for Q and T) / to create artificial memories and to install such memories (for A)] into biological storage devices created for this purpose. This technology also allows one to transfer such memories into the brains of other people. A person, into whose brain such [other people’s (for Q and T) / artificial (for A)] memories are transferred, cannot distinguish such transferred memories from their own memories. Also, no available technologies can distinguish such memories from others. This technology at the moment is experimental and secret, but it is already sometimes used as an educational tool, since it provides an easy way to transfer knowledge that was memorized by another person. It is also sometimes used as a means to improve psychological wellbeing by transferring pleasant [memories of other people (for Q and T) / artificial memories (for A)]
Imagine now that Albertas is a teenager who had a lot of [other people’s (for Q and T) / artificial (for A)] memories transferred into his brain in his childhood. Albertas does not know and has no reason whatsoever to suspect that such memory transfer was performed on him. [Not all his memories, however, are transferred memories of other people. Some of his memories are from the period before memory transfer. (only in T)] One of [the transferred (for Q and A) / such original (for T)] memories is about tasting rowan-berries in childhood. When someone asks Albertas whether he has ever tasted rowan-berries, Albertas replies with confidence: “Yes, I clearly remember eating rowan-berries when I was a child.” (pp. 178-9)
Test persons got a vignette either for the quasi-memory case, or for the artificial-memory case, or for the true-memory case. After having read it, they were asked two questions: “Do you agree that Albertas remembers that he has tasted rowan-berries when he was a child?”; the second question reads “knows” instead of “remembers”. They had to answer on a Likert scale. The result was that the test persons were willing to say that the agent “remembers” both in the case of artificial memory and in that of quasi-memory. This is contrary to Claim, for the artificial and quasi-memories in the vignettes were not based on what Albertas truly remembered!
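For readers who want to see the logic of such a between-subjects design spelled out, here is a minimal sketch in Python. It is only an illustration of the scoring idea, not Dranseika’s actual procedure or data: the participant numbers and ratings below are made up, and the function names are my own.

```python
import random
import statistics

# Sketch of a between-subjects vignette study: each participant is randomly
# assigned ONE condition (T = true memory, Q = quasi-memory, A = artificial
# memory) and rates a statement like "Albertas remembers ..." on a 7-point
# Likert scale (1 = fully disagree, 7 = fully agree).
CONDITIONS = ["T", "Q", "A"]

def assign_conditions(n_participants, seed=0):
    """Randomly assign each participant to exactly one vignette condition."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

def mean_agreement(responses):
    """Mean Likert rating per condition; responses are (condition, rating) pairs."""
    by_condition = {c: [] for c in CONDITIONS}
    for condition, rating in responses:
        by_condition[condition].append(rating)
    return {c: statistics.mean(r) for c, r in by_condition.items() if r}

# Hypothetical ratings: mean agreement well above the scale midpoint (4) in
# the Q and A conditions, not only in T, is the pattern that tells against Claim.
fake_responses = [("T", 7), ("T", 6), ("Q", 6), ("Q", 5), ("A", 6), ("A", 5)]
print(mean_agreement(fake_responses))  # {'T': 6.5, 'Q': 5.5, 'A': 5.5}
```

The point of the random assignment is that each person sees only one version, so differences between the condition means can be attributed to the vignette variation rather than to the individual respondents.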
In order to avoid the science-fiction scenario of the test just presented, Dranseika made a new vignette and did a new test, but now for misidentified dreams (one thinks one has experienced a certain event, but in fact one only dreamed it), again with the same result: in this case the test persons tended to agree that misidentified dreams were memories.
Dranseika makes a few more refinements in his tests, but they do not substantially change his results. So the upshot is (following Dranseika): Claim is not an essential feature of our ordinary use of “remembering” and “having a memory of”. We sometimes say “s/he remembers” while actually s/he doesn’t. We know s/he doesn’t, but we don’t see it as a problem. In view of other studies in the field of experimental philosophy (and not only there), once again there is reason to be skeptical about philosophical intuitions: Philosophers often think things are intuitively the way they claim, but once tested, it appears to be nothing more than one opinion among other opinions.

Vilius Dranseika, “False memories and quasi-memories are memories”, in Tania Lombrozo, Joshua Knobe and Shaun Nichols (eds.), Studies in Experimental Philosophy, Vol. 3. Oxford: Oxford University Press, 2020; pp. 175-188. I have extensively quoted from this article. A preliminary version can be found here.

Thursday, May 27, 2021

Random quote
Our minds are arrangements and activities in matter and energy. But those arrangements, once they exist, are not causes of minds; they are minds. Brain processes are not causes of thoughts and experiences; they are thoughts and experiences.

Peter Godfrey-Smith (1965-)

Monday, May 24, 2021

Collective intentionality and the development of man

One of the most discussed issues in present-day philosophy of action is whether some kind of group intention, collective intention, shared intention or whatever you would call it exists. If an individual plans an action like going for a run tomorrow, we say that she intends to do so or has such an intention. But what if several persons plan to do something together, like playing tennis or bridge? Since you cannot do this alone, can we say then that there is a kind of common intention and, if so, what is it? Several answers to this question have been proposed. Especially those by Michael Bratman, Margaret Gilbert, John Searle and Raimo Tuomela are considered important.
In Bratman’s approach to common intentionality, each individual has the intention to do his part of the shared goal and knows that the other(s) will do her part or their parts, while moreover the individual action plans mesh. According to Gilbert, if two or more individuals plan to do something together, they have a joint commitment. Each individual can cancel the obligation to contribute to the common task only with the consent of all other participants. According to Searle, collective intention transcends the individual minds, “and collective intentions expressed in the form ‘we intend to do such-and-such’, and, ‘we are doing such-and-such’ are … primitive phenomena and cannot be analyzed in terms of individual intentions …” According to Tuomela, individual contributors have a kind of we-intention or joint intention to perform a planned joint action together.
All such attempts to solve the problem of common intentionality start from two claims. These are (following Schweikard and Schmid in the Stanford Encyclopedia of Philosophy – SEP):
- Common intentionality is no simple summation, aggregate, or distributive pattern of individual intentionality (the Irreducibility Claim).
- Common intentionality is had by the participating individuals, and all the intentionality an individual has is their own (the Individual Ownership Claim).
As can be seen from the four different approaches just mentioned, the answers to the question what the commonness of common intentionality involves are very different, despite these shared starting points. Again following the SEP, some see the commonness in the content of the intention, like Bratman: individual actors strive to do the same thing together. Others see the commonness in the mode of the intention, like Tuomela: the actor switches from an individual action mode to a we-mode when he plans to perform an action together with others. Yet others see the commonness in the acting subject, like Gilbert: for her a group is a plural subject with its own collective intentional state, called joint commitment. Searle’s approach is a kind of mix of the mode approach and the subject approach: we-intentions are not individual intentions put together (mode switch), but the bearer (= subject) of the intention is not the individual but the group.
Most accounts suppose – usually implicitly – that there is one kind of common intentionality and that the question is to find out what it is like. However, the American developmental psychologist Michael Tomasello has put forward a very different idea. For why should there be only one kind of common intentionality, and why shouldn’t there be a kind of relationship, for instance a developmental relationship, between the different types of common intentionality proposed? From that perspective, prehistoric humans long ago first acted only individually together, so to speak: for practical reasons they sought the cooperation of other individuals for performing tasks that could be done more successfully and effectively together than alone (cf. Bratman’s approach). Later this developed in such a way that humans who had agreed to work together were obliged to do so in the sense of Gilbert’s joint commitment. One could only withdraw if the other partners in the job had rescinded the obligation (on penalty of being seen as untrustworthy if one didn’t meet one’s commitments). Later still, more complicated forms of common intentionality, as discussed by Searle and Tuomela, developed. (By the way, this is my interpretation of Tomasello; he doesn’t say so in so many words.) Seen this way, the question is not which approach to common intentionality is the right one, be it the one proposed by Bratman, Gilbert, Tuomela or Searle, or by whoever else has proposed an idea of what it is like, for the different approaches can be placed in a developmental perspective, in the sense that one was followed by another. Moreover, this view on common intentionality makes it quite possible that common intentionality can still have different expressions and take different shapes today.

- Schweikard, David P.; Hans Bernhard Schmid, “Collective Intentionality”, in Stanford Encyclopedia of Philosophy. See here also for literature on Bratman, Gilbert, Searle and Tuomela.
- Searle, John, “Collective Intentions and Actions”, in: P. Cohen, J. Morgan, and M.E. Pollack, (eds.), Intentions in Communication. Cambridge, MA: MIT press, 1990; pp. 401-415.
- Tomasello, Michael, A natural history of human thinking. Cambridge, Mass. Etc.: Harvard University Press, 2014.
- Zahavi, Dan; Glenda Satne, “Varieties of shared intentionality: Tomasello and classical phenomenology”, in: Jeffrey A. Bell et al. (eds.), Beyond the Analytic-Continental Divide. Pluralist Philosophy in the Twenty-First Century. New York: Routledge, 2015; pp. 305-325.

Thursday, May 20, 2021

Random quote
The dimension of the mental, the psychic, is pushed away from the debate, because necessarily it doesn’t fit anywhere in a causal connection.

Gerhard Roth (1942-)

Monday, May 17, 2021

The anti-vaccination fallacy

Statement: “They say that vaccination against Covid-19 will bring the solution. It doesn’t, so I don’t want to take the jab.” Here “they” are the politicians, virologists and all others who urgently advise people to have themselves vaccinated against Covid-19. In this blog I don’t want to discuss the question whether this vaccination really will bring the “solution” and end the pandemic, but there are a few mistakes in the reasoning in Statement – a reasoning I often hear – and this gives me the opportunity to discuss again a few bad arguments and fallacies. Maybe there are more in it, and it’s up to you to discover them, but here I’ll discuss four such bad arguments.

Appeal to ridicule
Gregory L. Bock describes the appeal-to-ridicule fallacy in this way: “To ridicule a point of view is to disparage or make fun of it. When someone uses ridicule as part of an argument, she commits an appeal to ridicule, which is a fallacy of relevance”, so an attempt “to support a conclusion using an irrelevant premise.” Making fun of a premise doesn’t make that premise, and so the whole reasoning, false.
Indeed, that’s usually the context in which I hear this false reasoning: making fun of, and also talking down to, those who promote vaccination. However, most politicians and virologists know that vaccination cannot put an end to Covid-19. The coronavirus will continuously change and new strains of the virus will develop, but vaccination will at least make the problem manageable by reducing the number of infections and ensuring that vaccinated people become less seriously ill if they get Covid-19. But those who support Statement often have their own simple solutions to the pandemic and ignore that the problem is complicated. They just find it enough to ridicule politicians and virologists who urge people to take the jab. In this sense Statement is also an argumentum ad hominem (see here and here). (see Source, p. 118)

Oversimplification
Implicitly I have also discussed another bad argument in the section above: the fallacy of oversimplification. This fallacy happens “when we attempt to make something appear simpler by ignoring certain relevant complexities”, so Dan Burnett (Source, p. 286). In Statement we see that politicians and virologists are said to see vaccination as the solution, while actually they see vaccination only as a part of the solution to end the pandemic. However, as Burnett ends his description of this fallacy: “When we obscure, ignore, or simply fail to identify certain factors, we run a high risk of misunderstanding reality.” Then “there’s a good chance our actions will – at best – be ineffective, or – at worst – exacerbate the very problem we are trying to solve.” (p. 288)

False dilemma
A false dilemma is reducing a complicated issue to excessively simple terms (see Source, p. 346). Often it is reducing a problem to an either-or question. This makes it a kind of black-and-white thinking. In Statement it is: either you see vaccination as “the solution” or you don’t take the jab. Nevertheless, it can be reasonable to take the jab, although you don’t think that vaccination will bring an end to the pandemic. Being vaccinated will at least diminish the chance that you’ll become – seriously – ill for the time to come. Maybe you must be vaccinated again later, but taking the vaccination now is at least a temporary solution; at least for you, even if on a world scale Covid-19 will not go away. Moreover, taking the jab is only one of the measures in the fight against the coronavirus. Keeping your distance is another, for example, like avoiding large parties, etc.

Confusing levels
The idea that vaccination against Covid-19 will end the pandemic, as ascribed to politicians and virologists in Statement, is a claim that concerns the pandemic as a whole, but it doesn’t mean that there’ll no longer be individuals who’ll get Covid-19. By vaccinating, however, the number of Covid patients will go down so much, these experts say, that we don’t need to speak of a pandemic anymore. Vaccination will make the problem manageable and ensure that the health services will not be overloaded any longer. This is a claim on a national, regional or world scale, but it is still quite possible that a relatively small number of individuals will get Covid-19. In order to make sure that it will not be you who’ll get it, it is reasonable for you to take the jab. What’s true on a higher level (that most people will not become ill) need not be true on a lower level (that you will not become ill).

With the help of Statement I have discussed four fallacies that are often heard in discussions. In some respects they overlap, as we could see in my explanations. Actually, Statement is at the same time an oversimplification, a false dilemma and a matter of confusing levels, while also those who propagate anti-Covid vaccination are ridiculed. If I am right, the fallacies discussed can together be summarized, in the context of the coronavirus pandemic, as the “anti-vaccination fallacy”.
But maybe, by using Statement as the starting point of this blog, I have ridiculed a substantial group of anti-vaxxers by oversimplifying what they think, by mistakenly saying that they maintain a false dilemma and by asserting that they confuse levels. Nevertheless, some do.

Arp, Robert; Steven Barbone; Michael Bruce (eds.), Bad arguments. 100 of the most important fallacies in Western philosophy. Oxford, etc.: Wiley Blackwell, 2019.

Thursday, May 13, 2021

Random Quote
The word, like a god or a daemon, confronts man not as a creation of his own, but as something existent and significant in its own right, as an objective reality.

Ernst Cassirer (1874-1945)

Monday, May 10, 2021

The second-person perspective

In philosophy (and not only there) we often talk about the first-person perspective. It is the way that I as a subject look at the world and interpret it. Another way of looking is the third-person perspective. It is the way of looking at the world in an impersonal way, from a distance, without being involved. We can also speak here of a detached perspective or, using the words of Thomas Nagel, a “view from nowhere”. This view is also called “objective”, since it considers what it perceives as objects, as much as possible without distortion by personal feelings, prejudices, or interpretations. In the same way, the first-person perspective is usually called “subjective”, since the feelings, prejudices and interpretations of the perceiving subject are inherent in this perspective. But if there is a first-person perspective and a third-person perspective, then there should also be a second-person perspective, and indeed there is. To my mind it is quite neglected in philosophy, although the second-person perspective is basic to how we as humans live, as becomes clear, for instance, from the works of Michael Tomasello.
In fact, the second-person perspective is simple: It is the I’s view of you, the person whom I am dealing with in one way or another. Put yourself in the shoes of the other and then you have the second-person perspective. It’s as simple as that. Simple? Not quite, for it took man a long way through prehistory to come that far. Still today many people have problems with taking the perspective of the other, let alone with taking it into account when dealing with others. But let me first look at what Stephen Darwall, who has thoroughly studied the second-person view, says about it.
The “second-person stance is a version of the first-person standpoint”, so Darwall. “It is the perspective one assumes in addressing practical thought or speech to, or acknowledging addresses from, another…” The I sees the other, the you, as his or her equal and because of this gives the other authority and keeps the other accountable for what s/he does, especially towards the I. This concerns for example orders, requests, claims, reproaches, complaints, demands, promises, contracts, givings of consent, commands, etc. If the you accepts the I as an authority in these respects, the I can ask explanation from the you in case s/he fails (or just is successful); or the other way round, of course, for the I-you relationship is reciprocal. It is a relation from person to person. So, “[w]hat the second-person stance excludes is the third-person perspective, that is, regarding, for practical purposes, others (and oneself), not in relation to oneself, but as they are … ‘objectively’ … [I]t rules out as well first-personal thought that lacks an addressing, second-personal aspect.” (pp. 8-10).
Michael Tomasello, who heavily relies on Darwall in this respect, says it this way: “Second-personal engagement has two minimal characteristics: (1) the individual is directly participating in, not observing from outside, the social interaction; and (2) the interaction is with a specific other individual with whom there is a dyadic relationship, not with something more like a group … (3) the essence of this kind of engagement is ‘mutual recognition’ in which each partner gives the other, and expects from the other, a certain amount of respect as an equal individual – a fundamentally cooperative attitude among partners.” (p. 48)
At first sight a mutual relationship based on a second-person perspective seems obvious when two persons meet. However, look around and you’ll see that it is often absent where it should be expected, with nasty consequences. I want to give two examples, both of which I have discussed in older blogs, albeit in another connection:
The first example is the Stanford prison experiment by Philip Zimbardo (for details see here): Zimbardo selected about twenty test subjects and assigned them at random to two groups, one group being the prisoners, the other the prison warders. Although there was initially no fundamental difference between the test subjects in the two groups, after one or two days both the prisoners and the warders acted very differently, in a way that went beyond their particular roles: the warders began to torture the prisoners, psychologically as well as physically. Isn’t it here that we see that at least the members of one party (the warders) forgot that they were dealing with fellow humans (the prisoners), even though they often dealt with them in personal relationships?
We see the same in a modern phenomenon: contact via the Internet. Here we are in contact with another person, but some aspects of the immediate relationship from person to person are absent, especially when chatting. As the British neuroscientist Susan Greenfield has pointed out, 50% of our communication with other people consists of body language and eye contact, and another 30% is done by our voice. The importance of direct body contact like hugging or shaking hands is still unknown. Precisely such person-to-person contacts are often absent when we communicate on the Internet by chatting or in another virtual way. This absence of bodily communication limits our assessment of how other people react to us and restricts our own reactions: we do not see the impact of our words on our conversation partner, so that we hurt him or her unknowingly, or even on purpose, as often happens (for details see here). We can also say it this way: even though we are in touch with the other and relate to each other as an I and a you, because of the imperfect technology the I-you relationship is affected, with possibly harmful consequences.
Doesn’t this illustrate how basic the second-person relationship is for understanding how we live?

- Stephen Darwall, The Second-Person Standpoint. Morality, Respect, and Accountability. Cambridge, Mass.: Harvard University Press, 2006.
- Michael Tomasello, A natural history of human thinking. Cambridge, Mass.: Harvard University Press, 2014. 

Friday, May 07, 2021

Random quote
Human thinking is individual improvisation enmeshed in a sociocultural matrix.

Michael Tomasello (1950-)

Monday, May 03, 2021

Montaigne’s Father

Most of us owe much to our parents, and so did Montaigne. It is striking, however, that Montaigne’s mother is completely absent from his Essays, while he regularly mentions his father and praises him as the “best father in the world”. Even more, thanks to his father Montaigne got the education that made him the philosopher we know today. So, good reason to ask: Who was Montaigne’s father, Pierre Eyquem?
Like his son after him, Pierre Eyquem was born in the Château de Montaigne, on 29 September 1495. The castle had been owned by the family since his grandfather Ramon Felipe Eyquem (1402-1478) bought it. Ramon Felipe was a merchant from Bordeaux who had become rich through the trade in fish, wine and indigo. The business was continued by his son Grimon Eyquem (1450-1519). Grimon was also a councillor in the Bordeaux city council for some time. Because he pursued nobility for his family and because a noble could not be a merchant, Grimon Eyquem broke with this family tradition at a later age and decided that his son Pierre should be educated as a knight. Therefore, he sent him to Jean de Durfort, Viscount of Duras, to serve as a page. In 1518 Pierre joined King Francis I’s army, where he was a soldier for ten years. He joined a company of archers that accepted only noblemen. The army service brought Pierre Eyquem to Italy, where he came into contact with the Renaissance and with Humanism. This gave him all kinds of ideas that would strongly influence his actions and thinking, such as in the upbringing of his son Michel.
Back at his castle, Pierre Eyquem began to expand his estate with the purchase of new land. His wife, a strong personality, helped him manage the estates. For in 1529 he had married Antoinette de Louppes de Villeneuve (1514–1603), probably the daughter of a family of Spanish Jews who had been converted to Christianity (voluntarily or by force) and who had moved to France. The couple had six children, of whom Michel was the eldest, apart from two children who died early. Pierre Eyquem also took part in the administrative life of Bordeaux and held various important positions there. In 1530 he was appointed first jurat and provost of Bordeaux. A jurat was what we would now call an alderman or councillor. Moreover, the jurats chose the mayor. A provost was a kind of tax administrator, but he also oversaw the management of the city’s buildings and goods and had a number of legal powers. In 1536 Pierre Eyquem was elected deputy mayor of Bordeaux and re-elected provost. In 1554 he was elected mayor. As mayor, he had the particularly difficult task of going to King Henry II to recover the city rights, which Bordeaux had lost in 1548 after a popular uprising against the salt taxes. Pierre Eyquem had to try to reconcile the king with the city. As a gesture of reconciliation, he brought twenty barrels of Bordeaux wine with him. Some time later Bordeaux did indeed get its old rights back. Although he often stayed in his city house in Bordeaux, Montaigne’s father did not forget his status as a nobleman and chatelain. He continued to train himself physically as a true knight and he received distinguished guests at his castle. One of them was Pierre Bunel, a scholar from Toulouse. On a visit in 1542, Bunel left behind the book Theologia Naturalis by Raymond Sebond as a gift. This work would later exert great influence on young Michel.
In 1554, the year he became mayor, Pierre Eyquem was also allowed to fortify his castle with a wall and towers. Until then it was no more than a large mansion. This was also necessary because of the violent religious discords in the region, which were getting stronger and stronger.
Pierre Eyquem died on 18 June 1568 in Bordeaux, possibly from the effects of a kidney stone attack, an ailment that his son Michel would also suffer from.
Pierre gave his eldest son Michel a special upbringing that was strongly influenced by his contact with the Renaissance and Humanism in Italy. Immediately after his birth, Michel was taken to a nurse in a nearby village. He would stay there for two years. Pierre then decided that his son’s mother tongue would be Latin, as recommended by Erasmus in his book De Pueris from 1529. Pierre Eyquem appointed a German educator for his son, who did not know French and had to raise him in Latin. Moreover, everyone in the castle had to speak Latin with the young Michel. Only when Michel went to the prestigious Collège de Guyenne at the age of six to continue his education did he start speaking French again, but because of his knowledge of Latin he was allowed to skip two classes. Although Montaigne says in his Essays that his knowledge of this language later declined, it remained sufficient to read Latin works in the original language. To prevent Michel from forgetting his Latin after his time at the Collège de Guyenne, his father asked him to translate the Theologia Naturalis. Michel did so and published the translation a year after the death of his father. Sebond’s work would, as said, exert great influence on Montaigne, and he wrote a long treatise on the Theologia Naturalis. It has become by far the longest essay in the Essays. Pierre Eyquem also arranged for his son to get a job as a lawyer at the Cour des Aides of Périgueux, a kind of court dealing with tax matters. This institution was later merged with the Parliament of Bordeaux and transferred to that city. Montaigne, too, was transferred to Bordeaux.

- Desan, Philippe, Montaigne. Une biographie politique. Paris: Odile Jacob, 2014
- “Montaigne, Michel de (1533-1592)”,
- “Pierre Eyquem de Montaigne”, in Wikipedia,

Friday, April 30, 2021

Random Quote
Giving grounds, however, justifying the evidence, comes to an end; - but the end is not certain propositions' striking us immediately as true, i.e. it is not a kind of seeing on our part; it is our acting, which lies at the bottom of the language-game.

Ludwig Wittgenstein (1889-1951)

Monday, April 26, 2021


The thought (taken at Reims, France)

Actually, it’s strange. Philosophizing is thinking, but how many philosophers ask themselves what thinking is? How many philosophers wonder what they do when they think? Of course, there are philosophers who do or did. However, I never have, neither here in my blogs nor in other writings. So, it’s time to fill this gap.
According to Wikipedia, “Thought (or thinking) encompasses an ‘aim-oriented flow of ideas and associations that can lead to a reality-oriented conclusion’ ”. But it immediately adds that “there is still no consensus as to how it is adequately defined or understood.” So, I’ll not try to give a definition of the phenomenon here. I’ll jump immediately to the question of what it practically involves. In answering this question, I’ll follow Michael Tomasello in his A natural history of human thinking, pp. 27-31. Although he doesn’t tell us what philosophical thinking involves, he leads us to the foundation of thinking as we practice it in daily life. And isn’t this also the foundation of all specialist ways of thinking?
According to Tomasello, thinking has three key components:
1) “The ability to work with some kind of abstract cognitive representations, to which the individual assimilates particular experiences” (p. 27)
2) “The ability to make inferences from cognitive representations”. (28)
3) “The ability to self-monitor the decision-making process”. (30)
Let me explain these components more in detail. In doing so I follow Tomasello, without saying so each time.

Ad 1) Cognitive representations are things like categories, schemas and models. They have three features. First, they are iconic or imagistic. So, as I interpret this, they refer or point to what they are about. As Tomasello says, what else could they be? In addition, such representations are schematic: they are generalizations or abstractions of reality as the thinker sees it. This means that the representations are the thinker’s interpretations; they are not reality as such. Moreover, cognitive representations are situational: They “have as their most basic content situations, specifically, situations that are relevant to the individual’s goals and values.” (27)
Ad 2) Thinking is not only a matter of having cognitive representations; it also involves making inferences from these representations to what does not exist, does not yet exist, or does exist but is not or cannot be perceived by the thinker at the moment s/he is thinking about it. These inferences can be causal and intentional and have a logical structure (understanding of cause-effect relations; understanding of logical implications, negation, and the like). They can also be productive in the sense that the thinker can generate off-line situations in her mind and infer or imagine nonactual situations. (28-30)
Ad 3) The ability to self-monitor is more than just taking decisions and anticipating their consequences; it is the ability to decide what one needs in order to take a decision and whether the information one has is sufficient. It’s a kind of ‘executive’ oversight of the decision process. (30)

Actually, what Tomasello discusses in these pages (27-31) refers to the thinking of the great apes. It’s an introduction to the way great ape thinking developed into human thinking. I must say that in my summary of these pages I haven’t followed Tomasello exactly but have already anticipated human thinking and given some of its characteristics. What was especially added in human thinking is recursive thinking in all its shapes: thinking about oneself; thinking about one’s own thinking; thinking about the thinking of others and about the fact that these others think about you; etc. Intentional thinking, too, has developed further. Moreover, human thinking is perspectival: it involves the ability to see others and the world in general through the eyes of another person or from an objective point of view.
To my mind the summary of thinking described above gives a good insight into the basics of the way humans think. Here I want to stress two especially important aspects of this human thinking: its schematic aspect (humans think in schemas or broad categories that structure their worlds) and the typically human recursive aspect that humans think about thinking. Isn’t this what philosophers do?

- “Thought”, Wikipedia,
- Tomasello, Michael, A natural history of human thinking. Cambridge, Mass.: Harvard University Press, 2014.

Thursday, April 22, 2021

Random quote
Once call some act a promise, and all questions whether there is an obligation to do it seems to have vanished.

H.A. Prichard (1871-1947) 

Monday, April 19, 2021

Newspeak and AstraZeneca

If you asked me which book made the deepest impression on me, I would probably say: George Orwell’s dystopian novel 1984. I read it many years ago, when I was a student, so long before 1984. Since then it has stayed in my mind. Through the years I have read many, many books and I have forgotten most of them; maybe not that I read them, but their contents. However, I have never forgotten the main line of 1984 and what’s important in it. I remember Big Brother, of course, but maybe even more what Orwell called Newspeak.
Oceania, the country Orwell describes in the book, is a dictatorship like North Korea today, although, when writing the book, Orwell had the Soviet Union and Nazi Germany in mind. In order to be more effective in steering and, if possible, determining the thoughts of the inhabitants of Oceania, a new language was developed: Newspeak. It was to function fully and be effective by 2050. Then everybody would use it and everybody’s thoughts would be determined by it, maybe with the exception of what the proles would think and say.
You find what Newspeak is like everywhere in the book, but Orwell gives a more systematic description in an annex to the novel, which I’ll use for my explanation. I’ll mention mainly what I need for the second part of this blog. Before I begin, I’ll first quote a description of Newspeak from Wiktionary: Newspeak is the “use of ambiguous, misleading, or euphemistic words in order to deceive the listener, especially by politicians and officials.” Note that this is not Orwell’s definition but the meaning “Newspeak” developed in later times among the common public; but keep it in mind when reading what follows.
The purpose of Newspeak is, according to Orwell, not only to provide a medium of expression for the world-view and mental habits proper to the inhabitants of Oceania, but to make all other modes of thought impossible. This is done by limiting the use of existing words, by making new words and by making the grammar as simple as possible. As for the first, take the word “free”. The word is not removed from the vocabulary, but in Newspeak it is used only to communicate the absence of something, for instance “The dog is free from lice” or “This field is free of weeds”. The word could not denote free will or political freedom, which supposedly don’t exist in Oceania. Besides giving words a new meaning, new words are also formed. Most striking and very important in Newspeak are words constructed from abbreviations, as was done in Nazi Germany and in the USSR. Think of words like Nazi (from Nationalsozialist=National Socialist), Komintern (Communist International), Gestapo (=Secret [Geheime] State Police), etc. Such words should be simple, staccato and easy to understand and pronounce. Third, the grammar should be as simple as possible, which I’ll not discuss here.
As said, the function of the new language was to steer and determine the mind, so that the people would think only what the leaders wanted them to think. The new words and meanings should not enlarge the brainpower but make it smaller. People should have positive thoughts when hearing or saying a word, and all other connotations should have been deleted from consciousness. Or, if people should avoid certain actions, the words referring to them (always having un- as the first syllable) should simply not include positive connotations (compare the modern word “indecent”). Especially the newly constructed words should be a kind of shorthand, summarizing a complex idea in a few syllables and concentrating on a positive meaning (and only on this positive meaning, without negative connotations). Everything else shouldn’t even pop up in your mind. Even expressing something other than this positive meaning should have been made impossible by the new construction.
I had to think of all this when I heard that the recently developed Covid-19 vaccine AstraZeneca (the AstraZeneca vaccine, for short) has got a new name. From now on it will be called Vaxzevria, which should sound better than the original name. Now you can say “What’s in a name?”, but in view of Orwell’s idea of Newspeak, it is likely that there is more behind this name change than the simple idea that it sounds better, although the vaccine maker denies this: the change of the name had already been planned for some time, they say. But what has happened? Once seen as one of the vaccines that would free us from the dictatorship of the coronavirus, the AstraZeneca vaccine is increasingly getting a bad reputation, mainly because it appears to have sometimes life-threatening side-effects. What’s then the best thing you can do if you can’t improve the image of your product? Change its name! Choose a name with a positive aura, one that makes people forget the negative sense that your product has acquired, by replacing the old brand name that has come to embody this negativity. But isn’t doing so (and isn’t in fact every change of a brand name) an attempt to manipulate the minds of the public by a kind of Newspeak? An attempt to delete all the negative connotations that pop up in your mind when you hear the words “AstraZeneca vaccine”? Isn’t it a way to steer and determine people’s minds? If you hear “Vaxzevria”, it should immediately pop up in your mind: “Yes! I want to get it!” For me, it’s simply Newspeak, and bad Newspeak at that, for it’s neither simple, nor staccato, nor easy to understand and pronounce. Vaxzevria, Wakssefria, Vagshefria, Wakse-Fria, what did you say?

Thursday, April 15, 2021

Random Quote

We should always prepare for the worst leaders, although we should try, of course, to get the best.

Karl Popper (1902-1994)

Monday, April 12, 2021

The negativity bias and Covid-19

More than ten years ago I wrote a blog about how people judge the side-effects of what someone has done. The essence was that the blame put on someone for causing negative side-effects is far bigger than the credit s/he receives for positive side-effects, even if they balance (see for the details my blog Praising the one who deserves it). Although it is not exactly the same, I had to think of it when I heard about the present discussion on the side-effects of the AstraZeneca anti-Covid vaccine. In fact, considering negative effects more important than positive effects is a general human phenomenon, called the negativity bias. It is “the notion that, even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things.” (Wikipedia) This is not only so when negative and positive effects balance; negative effects can dominate by far even when they are very small in comparison to the positive effects. There is a tendency not only to register negative stimuli more readily but also to dwell on these events. (Cherry) The effect is stronger the bigger the possible loss is, even when the possible gain is very big. (Kahneman)
Here are two examples from Cherry:
- You received a performance review at work that was quite positive overall and noted your strong performance and achievements. A few constructive comments pointed out areas where you could improve, and you find yourself fixating on those remarks. Rather than feeling good about the positive aspects of your review, you feel upset and angry about the few critical comments.
- You had an argument with your significant other. Afterward, you find yourself focusing on all of your partner’s flaws. Instead of acknowledging your partner’s good points, you ruminate over all of the imperfections. Even the most trivial of faults are amplified, while positive characteristics are overlooked.
Take now the AstraZeneca case, keeping especially the first example in mind. In order to contain the present coronavirus pandemic, new vaccines have been developed in haste. As everybody knows, medicines can have unintended side-effects, and these new vaccines are no exception. Moreover, because of the speed with which the vaccines have been developed, not all side-effects are known yet. Certainly the long-term side-effects aren’t. Therefore it is important to be attentive to possibly unpleasant implications of the vaccines. As it looks at present, the negative effects of most anti-Covid vaccines are minor. An exception appears to be the AstraZeneca vaccine: after having received their jabs, some people got thrombosis and some have died of it. The chance of getting it is about one in 150,000 vaccinations, they say. Probably this dramatic effect is caused by the vaccine. Should we then refuse the AstraZeneca vaccine because of this side-effect?
As it looks now, we can keep the coronavirus pandemic under control only with a vaccine. Several vaccines have been developed, but at the moment there is a shortage of anti-Covid vaccines, and in the immediate future this will remain so. So we need the AstraZeneca vaccine for the time to come. I haven’t looked up the figures, but specialists agree that far more people will be saved by getting this vaccine than will get thrombosis and die. “Saved” here means that they will not die of Covid-19 or suffer from serious, long-lasting, life-disturbing effects caused by Covid-19, whereas they would have become ill if they hadn’t received the AstraZeneca vaccine. So, from a rational point of view it’s far more sensible to take the jab than to refuse it. Nevertheless, many people don’t want to have it, just because of the negative side-effects. Although understandable, in view of what is now known about the side-effects, this is a clear instance of the negativity bias: although the positive effect of the AstraZeneca jab by far outweighs the negative side-effects, for many people it’s nevertheless the other way round. Paraphrasing Cherry: rather than feeling good about the positive aspects of the AstraZeneca vaccine, you feel upset and angry about the small chance that it can harm you. The negative effects are strongly overestimated. Even so, I would say, take another vaccine if you have the choice.
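The comparison of the two risks can be put in back-of-the-envelope form. In the sketch below, only the one-in-150,000 thrombosis figure comes from the discussion above; the Covid-risk numbers are purely illustrative assumptions I have made up for the sake of the example, not real data.

```python
# Hypothetical back-of-the-envelope comparison of the two risks.
# Only the 1-in-150,000 thrombosis figure comes from the text above;
# the Covid-risk numbers below are illustrative assumptions, not data.

p_thrombosis = 1 / 150_000   # chance of the rare side-effect per jab (from the text)

# Assumed (hypothetical) chance that an unvaccinated person eventually
# catches Covid-19 and then dies or suffers serious long-term harm:
p_infection = 0.10           # assumed infection chance without vaccination
p_bad_outcome = 0.01         # assumed chance of death/serious harm once infected
p_covid_harm = p_infection * p_bad_outcome

ratio = p_covid_harm / p_thrombosis
print(f"Risk of serious harm without the jab is ~{ratio:.0f}x the side-effect risk")
```

Even under quite cautious assumed numbers the ratio comes out far above one, which is the point of the negativity-bias argument: the rare negative effect looms larger in the mind than the much bigger positive effect.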

- Cherry, Kendra, “What Is the Negativity Bias?”,
- Kahneman, Daniel, Thinking, Fast and Slow. Penguin Books, London, 2012; pp. 278-303.
- “Negativity bias”, Wikipedia,

Thursday, April 08, 2021

Random Quote

The position which is in process of becoming superseded wastes its polemical energies on fighting already outmoded features in the opposed view, and tends to see what is retained in the emerging position as only a deformed shadow of its own self.

Georg Henrik von Wright (1916-2003)

Monday, April 05, 2021

Goodhart’s Law and Covid-19

You get an order from a building company to make one ton of nails. What will you do? I guess you’ll produce big nails, for that’s easier and cheaper than producing small nails. From another building company you get an order for one million nails. For them you’ll produce little nails, I think, for you need less iron to make them. If you behave that way, you follow Goodhart’s Law. Such behaviour is not a figment of my imagination; it often happened, for example, in the former Soviet Union, in order to fulfil the targets imposed by the planning authorities.
Goodhart’s Law was developed in 1975 by Charles Goodhart. He formulated it this way:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
Later Marilyn Strathern gave it its present wording, which is generally used since then:
When a measure becomes a target, it ceases to be a good measure.
Strathern also gave the law its present name. Applied to the introductory case of this blog: the nail factory gets an order but doesn’t ask itself what the customer needs, such as nails of different shapes and sizes. It only wants to execute the order as it stands, whether that’s reasonable or not. The customers can avoid such a reaction by specifying the types of nails they need as precisely as possible.
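The nail-factory case can even be put in toy-model form: once the target is fixed as a single measurable quantity, the cheapest way to hit it falls out mechanically. All the weights and costs in this sketch are made-up numbers for illustration only.

```python
# Toy model of the nail factory (all figures are hypothetical).
# Each nail size has a weight in grams and a production cost per nail.
sizes = {
    "big":   {"grams": 100, "cost": 0.02},
    "small": {"grams": 10,  "cost": 0.01},
}

def cost_for_one_ton(size):
    # Target: deliver 1,000,000 grams of nails, whatever their number.
    nails_needed = 1_000_000 / sizes[size]["grams"]
    return nails_needed * sizes[size]["cost"]

def cost_for_one_million(size):
    # Target: deliver 1,000,000 nails, whatever their weight.
    return 1_000_000 * sizes[size]["cost"]

cheapest_by_weight = min(sizes, key=cost_for_one_ton)
cheapest_by_count = min(sizes, key=cost_for_one_million)
print(cheapest_by_weight, cheapest_by_count)  # big small
```

The same factory, optimizing honestly against each measure, produces big nails for the weight target and small ones for the count target, exactly as in the story: the measure, not the customer’s need, decides the outcome.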
Initially, Goodhart’s Law was used to describe economic behaviour. Later it got also a political interpretation and actually it can be applied, if relevant, to any kind of actions.
At the basis of this law is a general sociological phenomenon that has been described by Jürgen Habermas in his theory of two levels of meaning: level 1 and level 0. Level 1 is the level all sciences are faced with when they theoretically interpret their objects of research. Level 0 is typical of those sciences that deal with objects that have been given meaning by the investigated people themselves. This made me distinguish two kinds of meaning: meaning 1 and meaning 0 (see my 2001 for a detailed explanation). Meaning 1 is the kind of meaning used on level 1. It is the meaning a scientist gives to an object, whether physical or social in character; it is the scientist’s theoretical interpretation of reality. Meaning 0 is the concept of meaning for the underlying level 0. It is the meaning that the people who make up social reality give to this social reality, or to parts of it, themselves; it is their interpretation of their own lived reality. We can apply this two-layer model also to Goodhart’s Law. Then we get this: level 1 is the level at which the target is set in objective (measurable) quantities, for example by a customer or a policy maker. The people who have to realize the target are at level 0. They give it their own interpretation, one that suits them best. However, this subjective interpretation need not be what those who formulated the target thought it should be. If the people who have to realize the target take it literally as it is, it’s quite possible that the target ceases to be a good measure. If so, Goodhart’s Law applies.
Goodhart’s Law can occur wherever quantitative targets are set, and so we also see it appear in the present Covid-19 pandemic. Quantitative targets are set to indicate when the pandemic has become manageable, and everything is done to reach them. However, at the same time the negative effects of the measures taken to achieve the targets are not seen, are ignored, or are pushed away as unimportant in view of the higher goal of reducing the number of infections. Or the targets are simply evaded by people changing their behaviour. As for the latter, for instance, people get around the curfews by meeting others or shopping at the hours when it is still allowed, which leads to more people being together at other moments. As for the former, everywhere everything is done to reduce the number of Covid-19 patients, and as such it is a good target, but the price is high. Patients with other serious diseases and illnesses often cannot get the treatments they need, for instance because there is no room for them in the hospitals; in many cases this has led to early deaths. Or, another example, people have died from loneliness in nursing homes and at home because they were, and often still are, not allowed to meet other people, especially their relevant others. Yet other people come to suffer from depressions, often so seriously that they commit suicide. Children suffer because they cannot go to school. Or think of the big negative consequences the restrictions have for the economic lives of many people, so that their quality of life has gone down and will probably remain lower for many years.
Now I am the last to say that we should take less care of the Covid-19 patients or that we shouldn’t take measures to stop the pandemic. That’s not the point. The point is that at the moment only the pandemic counts, and politicians pay attention to the negative effects of the restrictions only with words, not with deeds. It’s striking, for instance, that the Dutch Outbreak Management Team, which has the task of informing the Dutch government about the pandemic and proposing measures to contain it, has only doctors, virologists and the like among its members, but no psychologists and sociologists, who could assess the social and psychological effects of the measures and propose alternatives. Then one must not be surprised that Goodhart’s Law applies and that the anti-Covid measures cease to be good measures.

- Henk bij de Weg, “The commonsense conception and its relation to philosophy”, Philosophical Explorations, 2001/1, pp. 17-30.
- “Goodhart’s Law”, in Wikipedia,

Thursday, April 01, 2021

Random quote

The forced imposition of mathematical and mensurative methods has gradually led to a situation in which certain sciences no longer ask what is worth knowing but regard as worth knowing only what is measurable.

                                                                            Karl Mannheim (1893-1947)                                           

Monday, March 29, 2021

Omitting and responsibility

People are responsible for what they do but are they also responsible for what they omit to do? That’s what I want to discuss in this blog.
Omitting is not acting in a situation where you could have acted. Omitting can also be described as allowing something to happen. A man beats his wife and nobody interferes. A child has fallen into a canal and a passer-by who sees it refrains from jumping in after her or looking for help. There can be good reasons for doing nothing, but if someone refrains from acting where s/he could and should have acted, we call it an “omission” and s/he can be blamed for it. If that’s right, we can ask the question of responsibility also in cases of omitting, and not only when someone actively performs an action.
Take now these cases:
Case 1: Victim is drowning and Agent is the only person around. The sea is infested with sharks and Agent has just seen one swimming by. Agent decides not to jump into the water and help Victim, and Victim dies. Since it is almost certain that the sharks would have attacked Agent and prevented him from saving Victim, I think that nobody will blame Agent for omitting to act or hold him responsible for the death of Victim.
Case 2: The same situation, but Agent does not know that the sea is infested with sharks and he hasn’t seen one. Again Agent decides to do nothing and Victim dies. Is Agent to be blamed for that and to be held responsible for Victim’s death? Some philosophers say “no”, for the death of Victim couldn’t have been prevented anyhow. As Willemsen (2020), p. 233 (who doesn’t endorse this view as such) explains: “In order to be morally responsible for the consequences of an omission … the agent needs to be able to perform a relevant action that would have prevented the outcome”, and that’s not the case here. Although this sounds reasonable, it’s nevertheless a bit counterintuitive, and I think that many readers of this blog will not agree.
What’s the problem then? Why do we hesitate to say that in Case 2 Agent is not responsible for Victim’s death? In order to make this clear, let’s look a bit closer at the cases. Then we see that they differ in an important way. In Case 1 Agent refrained from acting because it would have made no sense to do so and he knew it. If he had tried to save Victim and had jumped into the sea, the sharks would have attacked him and might have killed him. Therefore, his omitting to act was involuntary. Case 2 says nothing about the reasons why Agent didn’t act, but given the description of the case as it is, it would have been reasonable for Agent to jump into the water, for he didn’t know about the sharks. At least he could have tried to save Victim. The sharks would have attacked him (let’s hope that he would have escaped), but Agent in Case 2 didn’t know that this could happen. So, it’s true that Agent in Case 2 cannot be held responsible – at least morally responsible – for the death of Victim, but we can hold him (morally) responsible for not having tried to save Victim. In Case 1, however, Agent already knew in advance that trying to save Victim would make no sense, and so for him trying was not an option. Therefore Agent cannot be held (morally) responsible for not having given it a try in Case 1.
In an important sense we cannot hold Agent responsible for Victim’s death in either case: he didn’t bring Victim into his perilous situation. Nonetheless, we can ask whether Agent was morally responsible for Victim’s death in the sense that Agent could have prevented it. When doing so we must realize that “moral responsibility” can be understood in two ways. It can refer to the results or consequences of an action (consequential moral responsibility) or it can refer to the action as such (actional responsibility). In the former sense Agent is not responsible for Victim’s death, neither in Case 1 nor in Case 2. In Case 2, however, we can hold Agent responsible in the latter sense, while in Case 1 nothing can be held against Agent that way.

Source and inspiration
- Pascale Willemsen, “The Relevance of Alternate Possibilities for Moral Responsibility for Actions and Omissions”, in Tania Lombrozo, Joshua Knobe and Shaun Nichols (eds.), Oxford Studies in Experimental Philosophy. Volume Three. Oxford: Oxford University Press, 2020; pp. 232-274.

Thursday, March 25, 2021

Random quote

If men define situations as real, they are real in their consequences.

W.I. Thomas (1863-1947)

Monday, March 22, 2021

Being responsible for what you do

Whether an agent is responsible for what s/he does depends on whether s/he did what s/he did intentionally or whether what s/he did merely happened to him or her. We saw this in my last blog. In my argument I referred to Donald Davidson and, without saying so, I also made use of what Davidson has written about the subject, although I didn’t fully follow his line of reasoning. Davidson was one of those who discussed the relationship between acting intentionally and responsibility from an analytical philosophical perspective, but as such the theme is as old as philosophy itself. Look for example at what Aristotle said about it at the beginning of Book III of the Nicomachean Ethics:
“Since virtue is concerned with passions and actions, and on voluntary passions and actions praise and blame are bestowed, on those that are involuntary pardon, and sometimes also pity, to distinguish the voluntary and the involuntary is presumably necessary for those who are studying the nature of virtue, and useful also for legislators with a view to the assigning both of honours and of punishments. Those things, then, are thought involuntary, which take place under compulsion or owing to ignorance; and that is compulsory of which the moving principle is outside, being a principle in which nothing is contributed by the person who is acting or is feeling the passion, e.g. if he were to be carried somewhere by a wind, or by men who had him in their power.” (III 1109b30-1110a4)
So, according to Aristotle, actions are either voluntary or involuntary, and this determines whether or not we are responsible for them. It’s as simple as that. Or is it? As I have argued in some old blogs (for example in Digging your garden alone or Do pure individual intentions and actions exist?), actions are rarely isolated events but usually are embedded in, or at least depend on, what others do or have done. Aristotle saw this too, for next he says:
“With regard to the things that are done from fear of greater evils or for some noble object (e.g. if a tyrant were to order one to do something base, having one's parents and children in his power, and if one did the action they were to be saved, but otherwise would be put to death), it may be debated whether such actions are involuntary or voluntary.” (III 1110a4-8)
In other words, you can be forced by circumstances to do what you don’t want to do, even if in theory you are free to act differently, though no one expects you to do so. Usually things are not as simple as a dichotomy can make you think they are. Pure dichotomies are exceptional.
Rather than going on with what Aristotle says about the question, I want to give some examples in order to clarify the present problem a bit (the examples are taken from Manninen 2019).
The first example is rather clear: A person gets in her car and starts driving. Suddenly she has a stroke, loses control of her car and causes a collision that results in fatalities. The driver is then causally responsible for the collision, but most of us will agree that morally she isn’t: the collision was caused by factors beyond the driver’s control.
The second example is the much-discussed Eichmann case. Adolf Eichmann was sentenced to death for his contribution to the Holocaust. He stated, however (again I follow Manninen here): “There is a need to draw a line between the leaders responsible and the people like me forced to serve as mere instruments in the hands of the leaders.” Orders are orders, aren’t they? Not so, it was judged, and Eichmann was hanged, among other things on account of Nuremberg Principle IV, which says: “The fact that a person acted pursuant to order of his Government or of a superior does not relieve him from responsibility under international law, provided a moral choice was in fact possible to him.” So, even if you are ordered to do something, you remain personally responsible for the moral consequences of what you do, unless there is reasonably no escape. The “unless” is crucial and gives ground for discussion about when and whether an agent really has acted freely. That’s why the Eichmann case has been discussed so much.
The upshot is that responsibility for what you do often depends on the context in which the deed is done, for the context often determines whether what you do is an intentional action or something that happens to you. Whether a deed is intentional depends on how it appears to others, rightly or mistakenly.

- Aristotle, Nicomachean Ethics.
- Tuomas W. Manninen, “Diminished Responsibility”, in Arp, Robert; Steven Barbone; Michael Bruce (eds.), Bad arguments. 100 of the most important fallacies in Western philosophy. Oxford, etc.: Wiley Blackwell, 2019; pp. 145-148.