
Friday, May 07, 2021

Random quote
Human thinking is individual improvisation enmeshed in a sociocultural matrix.

Michael Tomasello (1950-)

Monday, May 03, 2021

Montaigne’s Father


Most of us owe much to our parents, and so did Montaigne. It is striking, however, that Montaigne’s mother is completely absent from his Essays, while he regularly mentions his father and praises him as the “best father in the world”. What is more, it was thanks to his father that Montaigne received the education that made him the philosopher we know today. So there is good reason to ask: Who was Montaigne’s father, Pierre Eyquem?
Like his son after him, Pierre Eyquem was born in the Château de Montaigne, on 29 September 1495. The castle had been owned by the family since his grandfather Ramon Felipe Eyquem (1402-1478) bought it. Ramon Felipe was a merchant from Bordeaux who had become rich through the trade in fish, wine and indigo. The business was continued by his son Grimon Eyquem (1450-1519). Grimon was also a councillor in the Bordeaux city council for some time. Because he pursued nobility for his family and because a noble could not be a merchant, Grimon Eyquem broke with this family tradition at a later age and decided that his son Pierre should be educated as a knight. He therefore sent him to Jean de Durfort, Viscount of Duras, to serve as a page. In 1518 Pierre joined King Francis I’s army, where he was a soldier for ten years. He joined a company of archers that accepted only noblemen. Army service brought Pierre Eyquem to Italy, where he came into contact with the Renaissance and with Humanism. This gave him all kinds of ideas that would strongly influence his actions and thinking, for instance in the upbringing of his son Michel.
Back at his castle, Pierre Eyquem began to expand his estate with the purchase of new land. His wife, a strong personality, helped him manage the estates. For in 1529 he had married Antoinette de Louppes de Villeneuve (1514–1603), probably a daughter of a family of Spanish Jews who had been converted to Christianity (voluntarily or by force) and who had moved to France. The couple had six children, of whom Michel was the eldest, apart from two children who died early. Pierre Eyquem also took part in the administrative life of Bordeaux and held various important positions there. In 1530 he was appointed first jurat and provost of Bordeaux. A jurat was what we would now call an alderman or councillor. Moreover, the jurats chose the mayor. A provost was a kind of tax administrator, but he also oversaw the management of the city’s buildings and goods and had a number of legal powers. In 1536 Pierre Eyquem was elected deputy mayor of Bordeaux and re-elected provost. In 1554 he was elected mayor. As mayor, he had the particularly difficult task of going to King Henry II to recover the city rights, which Bordeaux had lost in 1548 after a popular uprising against the salt taxes. Pierre Eyquem had to try to reconcile the king with the city. As a gesture of reconciliation, he brought twenty barrels of Bordeaux wine with him. Some time later Bordeaux did indeed get its old rights back. Although he often stayed in his city house in Bordeaux, Montaigne’s father did not forget his status as a nobleman and chatelain. He continued to train himself physically as a true knight and he received distinguished guests at his castle. One of them was Pierre Bunel, a scholar from Toulouse. On a visit in 1542, Bunel left behind the book Theologia Naturalis by Raymond Sebond as a gift. This work would later exert great influence on young Michel.
In 1554, the year he became mayor, Pierre Eyquem was also given permission to fortify his castle with a wall and towers. Until then it was no more than a large mansion. The fortification was also necessary because of the violent religious discord in the region, which was growing stronger and stronger.
Pierre Eyquem died on 18 June 1568 in Bordeaux, possibly from the effects of a kidney stone attack, an ailment that his son Michel would also suffer from.
Pierre gave his eldest son Michel a special upbringing that was strongly influenced by his contact with the Renaissance and Humanism in Italy. Immediately after his birth, Michel was taken to a nurse in a nearby village, where he would stay for two years. Pierre then decided that his son’s mother tongue would be Latin, as recommended by Erasmus in his book De Pueris of 1529. Pierre Eyquem appointed a German tutor for his son, who knew no French and had to raise him in Latin. Moreover, everyone in the castle had to speak Latin with the young Michel. It was only when Michel went to the prestigious Collège de Guyenne at the age of six to continue his education that he started speaking French again, but because of his knowledge of Latin he was allowed to skip two classes. Although Montaigne says in his Essays that his command of the language later declined, it remained good enough to read Latin works in the original. To prevent Michel from forgetting his Latin after his time at the Collège de Guyenne, his father asked him to translate the Theologia Naturalis. Michel did so and published the translation a year after his father’s death. Sebond’s work would, as said, exert great influence on Montaigne, and he wrote a long treatise on the Theologia Naturalis, which became by far the longest essay in the Essays. Pierre Eyquem also arranged for his son to get a position as a lawyer at the Cour des Aides of Périgueux, a kind of court dealing with tax matters. This institution was later merged with the Parliament of Bordeaux and transferred to that city, and Montaigne was transferred to Bordeaux along with it.

Sources
- Desan, Philippe, Montaigne. Une biographie politique. Paris: Odile Jacob, 2014
- “Montaigne, Michel de (1533-1592)”, https://mediatheque.sainthilairederiez.fr/node/597440?&from=/node/597440
- “Pierre Eyquem de Montaigne”, in Wikipedia, https://de.wikipedia.org/wiki/Pierre_Eyquem_de_Montaigne

Friday, April 30, 2021

Random Quote
Giving grounds, however, justifying the evidence, comes to an end; - but the end is not certain propositions' striking us immediately as true, i.e. it is not a kind of seeing on our part; it is our acting, which lies at the bottom of the language-game.

Ludwig Wittgenstein (1889-1951)

Monday, April 26, 2021

Thinking

The thought (taken at Reims, France)

Actually, it’s strange. Philosophizing is thinking, but how many philosophers ask themselves what thinking is? How many philosophers wonder what they do when they think? Of course, there are philosophers who do or did. However, I have never done so, neither here in my blogs nor in other writings. So it’s time to fill this gap.
According to Wikipedia, “Thought (or thinking) encompasses an ‘aim-oriented flow of ideas and associations that can lead to a reality-oriented conclusion’ ”. But it immediately adds that “there is still no consensus as to how it is adequately defined or understood.” So I’ll not try to give a definition of the phenomenon here. I’ll jump immediately to the question of what it practically involves. In answering this question, I’ll follow Michael Tomasello in his A natural history of human thinking, pp. 27-31. Although he doesn’t tell us what philosophical thinking involves, he leads us to the foundation of thinking as we practice it in daily life. And isn’t this also the foundation of all specialist ways of thinking?
According to Tomasello, thinking has three key components:
1) “The ability to work with some kind of abstract cognitive representations, to which the individual assimilates particular experiences” (p. 27)
2) “The ability to make inferences from cognitive representations”. (28)
3) “The ability to self-monitor the decision-making process”. (30)
Let me explain these components in more detail. In doing so I follow Tomasello, without saying so each time.

Ad 1) Cognitive representations are things like categories, schemas and models. They have three features. First, they are iconic or imagistic; as I interpret this, they refer or point to what they are about. As Tomasello says, what else could they be? Second, such representations are schematic: they are generalizations or abstractions of reality as the thinker sees it. This means that the representations are the thinker’s interpretations; they are not reality as such. Third, cognitive representations are situational: they “have as their most basic content situations, specifically, situations that are relevant to the individual’s goals and values.” (27)
Ad 2) Thinking is not only a matter of having cognitive representations; it also involves making inferences from these representations to what does not exist, does not yet exist, or does exist but is not or cannot be perceived by the thinker at the moment s/he is thinking about it. These inferences can be causal and intentional and have a logical structure (understanding of cause-effect relations; understanding of logical implications, negation, and the like). They can also be productive in the sense that the thinker can generate off-line situations in her mind and infer or imagine nonactual situations. (28-30)
Ad 3) The ability to self-monitor is more than just taking decisions and anticipating their consequences; it is the ability to decide what one needs in order to take a decision and whether the information one has is sufficient. It’s a kind of ‘executive’ oversight of the decision-making process. (30)

Actually, what Tomasello discusses in these pages (27-31) refers to the thinking of the great apes. It’s an introduction to the way great ape thinking developed into human thinking. I must say that in my summary of these pages I haven’t followed Tomasello exactly, but have already anticipated human thinking and given some of its characteristics. What was especially added in human thinking is recursive thinking in every shape: thinking about oneself; thinking about one’s own thinking; thinking about the thinking of others and about the fact that these others think about you; etc. Intentional thinking has also developed further. Moreover, human thinking is perspectival: the ability to see others and the world in general through the eyes of another person or from an objective point of view.
To my mind the summary of thinking described above gives a good insight into the basics of the way humans think. Here I especially want to stress two important aspects of this human thinking: its schematic aspect (humans think in schemas or broad categories that structure their worlds) and the typically human recursive aspect, namely that humans think about thinking. Isn’t this what philosophers do?

Sources
- “Thought”, Wikipedia, https://en.wikipedia.org/wiki/Thought
- Tomasello, Michael, A natural history of human thinking. Cambridge, Mass., etc.: Harvard University Press, 2014.

Thursday, April 22, 2021

Random quote
Once call some act a promise, and all questions whether there is an obligation to do it seems to have vanished.

H.A. Prichard (1871-1947) 

Monday, April 19, 2021

Newspeak and AstraZeneca


If you asked me which book made the deepest impression on me, I would probably say George Orwell’s dystopian novel 1984. I read it many years ago, when I was a student, so long before 1984, and it has been in my mind ever since. Through the years I have read many, many books and I have forgotten most of them; maybe not that I had read them, but their contents. However, I have never forgotten the main line of 1984 and what’s important in it. I remember Big Brother, of course, but maybe even more what Orwell called Newspeak.
Oceania, the country Orwell describes in the book, is a dictatorship like North Korea today, although, when writing the book, Orwell had the Soviet Union and Nazi Germany in mind. In order to be more effective in steering and, if possible, determining the thoughts of the inhabitants of Oceania, a new language was developed: Newspeak. It was to be fully functional and effective by 2050. By then everybody should use it and everybody’s thoughts should be determined by it, maybe with the exception of what the proles would think and say.
You find what Newspeak is like everywhere in the book, but Orwell gives a more systematic description in an appendix to the novel, which I’ll use for my explanation. I’ll mention mainly what I need for the second part of this blog. Before I begin, let me first quote a description of Newspeak from Wiktionary: Newspeak is the “use of ambiguous, misleading, or euphemistic words in order to deceive the listener, especially by politicians and officials.” Note that this is not Orwell’s definition but the meaning “Newspeak” later acquired among the general public; keep it in mind when reading what follows.
The purpose of Newspeak, according to Orwell, is not only to provide a medium of expression for the world-view and mental habits proper to the inhabitants of Oceania, but to make all other modes of thought impossible. This is done by limiting the use of existing words, by making new words and by making the grammar as simple as possible. As for the first, take the word “free”. The word is not removed from the vocabulary, but in Newspeak it is used only to communicate the absence of something, for instance “The dog is free from lice” or “This field is free of weeds”. The word could not denote free will or political freedom, which supposedly don’t exist in Oceania. Second, besides giving existing words a new meaning, new words are also formed. Most striking and very important in Newspeak are words constructed from abbreviations, as was done in Nazi Germany and in the USSR. Think of words like Nazi (from Nationalsozialist = National Socialist), Komintern (Communist International), Gestapo (= Secret [Geheime] State Police), etc. Such words should be simple, staccato and easy to understand and to pronounce. Third, the grammar should be as simple as possible, which I’ll not discuss here.
As said, the function of the new language was to steer and determine the mind, so that people would think only what the leaders wanted them to think. The new words and meanings were meant not to enlarge the range of thought but to narrow it. People should have positive thoughts when hearing or saying a word, and all other connotations should have been deleted from consciousness. Or, if people were to avoid certain actions, the words referring to them (always having un- as the first syllable) simply should not include positive connotations (compare the modern word “indecent”). Especially the newly constructed words should be a kind of shorthand, summarizing a complex idea in a few syllables and concentrating on a positive meaning (and only on this positive meaning, without negative connotations). Everything else shouldn’t even pop up in your mind. Even expressing anything other than this positive meaning should have been made impossible by the new construction.
I had to think of all this when I heard that the recently developed AstraZeneca Covid-19 vaccine (“AstraZeneca vaccine” for short) has got a new name. From now on it will be called Vaxzevria, which should sound better than the original name. Now you can say “What’s in a name?”, but in view of Orwell’s idea of Newspeak it is likely that there is more behind this name change than the simple idea that it sounds better, although the vaccine maker denies this: the change of name had already been planned for some time, they say. But what has happened? Once seen as one of the vaccines that would free us from the dictatorship of the coronavirus, the AstraZeneca vaccine is increasingly getting a bad reputation, mainly because it appears that it can have life-threatening side-effects. What then is the best thing you can do if you can’t improve the image of your product? Change its name! Choose a name with a positive aura, one that makes people forget the negative sense your product has acquired, by replacing the old brand name that has come to embody this negativity. But isn’t doing so (and isn’t in fact every change of a brand name) an attempt to manipulate the minds of the public by a kind of Newspeak? An attempt to delete all the negative connotations that pop up in your mind when you hear the words “AstraZeneca vaccine”? Isn’t it a way to steer and determine people’s minds? If you hear “Vaxzevria”, the thought should immediately pop up in your mind: “Yes! I want to get it!” For me, it’s simply Newspeak, and bad Newspeak at that, for it’s neither simple, nor staccato, nor easy to understand and pronounce. Vaxzevria, Wakssefria, Vagshefria, Wakse-Fria, what did you say?

Thursday, April 15, 2021

Random Quote

We should always prepare for the worst leaders, although we should try, of course, to get the best.

Karl Popper (1902-1994)

Monday, April 12, 2021

The negativity bias and Covid-19

More than ten years ago I wrote a blog about how people judge the side-effects of what someone has done. The essence was that the blame put on someone for causing negative side-effects is far bigger than the credit s/he receives for positive side-effects, even if they balance (see for the details my blog Praising the one who deserves it). Although it is not exactly the same, I had to think of it when I heard about the present discussion on the side-effects of the AstraZeneca anti-Covid vaccine. In fact, considering negative effects more important than positive effects is a general human phenomenon, called the negativity bias. It is “the notion that, even when of equal intensity, things of a more negative nature (e.g. unpleasant thoughts, emotions, or social interactions; harmful/traumatic events) have a greater effect on one’s psychological state and processes than neutral or positive things.” (Wikipedia) This is so not only when negative and positive effects balance; negative effects can dominate positive effects even when they are much smaller in comparison. There is a tendency not only to register negative stimuli more readily but also to dwell on these events. (Cherry) The effect is stronger the bigger what you can lose is, even when what you can gain is very big. (Kahneman)
Here are two examples from Cherry:
- You received a performance review at work that was quite positive overall and noted your strong performance and achievements. A few constructive comments pointed out areas where you could improve, and you find yourself fixating on those remarks. Rather than feeling good about the positive aspects of your review, you feel upset and angry about the few critical comments.
- You had an argument with your significant other. Afterward, you find yourself focusing on all of your partner’s flaws. Instead of acknowledging your partner’s good points, you ruminate over all of the imperfections. Even the most trivial of faults are amplified, while positive characteristics are overlooked.
Take now the AstraZeneca case, keeping especially the first example in mind. In order to contain the present coronavirus pandemic, new vaccines have been developed in haste. As everybody knows, medicines can have unintended side-effects, and these new vaccines are no exception. Moreover, because of the speed with which the vaccines have been developed, not all side-effects are known yet; certainly the long-term side-effects aren’t. Therefore it is important to be attentive to possibly unpleasant effects of the vaccines. As it looks at present, the negative effects of most anti-Covid vaccines are minor. An exception appears to be the AstraZeneca vaccine: after having received their jabs, some people got thrombosis and some have died of it. The chance of getting it is said to be about one in 150,000 vaccinations. Probably this dramatic effect is caused by the vaccine. Should we then refuse the AstraZeneca vaccine because of this side-effect?
As it looks now, we can keep the coronavirus pandemic under control only with a vaccine. Several vaccines have been developed, but at the moment there is a shortage of anti-Covid vaccines, and in the immediate future this will remain so. So we need the AstraZeneca vaccine for the time to come. I haven’t looked up the figures, but specialists agree that far more people will be saved by getting this vaccine than will get thrombosis and die. “Saved” means here that they will not die of Covid-19 or suffer from serious, long-lasting, nasty and life-disturbing effects caused by Covid-19, while they would have become ill if they hadn’t received the AstraZeneca vaccine. So, from a rational point of view it’s far more sensible to take the jab than to refuse it. Nevertheless, many people don’t want to have it just because of the negative side-effects. Although understandable, in view of what is now known about the side-effects, this is a clear instance of the negativity bias: although the positive effect of the AstraZeneca jab far outweighs the negative side-effects, for many people it is the other way round. Paraphrasing Cherry: rather than feeling good about the positive aspects of the AstraZeneca vaccine, you feel upset and angry about the small chance that it can harm you. The negative effects are strongly overestimated. Even so, I would say, take another vaccine if you have the choice.
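To see the structure of the rational comparison, here is a minimal back-of-the-envelope sketch in Python. Only the one-in-150,000 thrombosis figure comes from the text above; every other number (chance of infection, fatality rates, vaccine efficacy) is a purely hypothetical assumption for illustration, not real data.
```python
# Back-of-the-envelope comparison of two risks of dying.
# Only p_thrombosis (1 in 150,000) is taken from the blog text;
# all other numbers are hypothetical assumptions for illustration.

p_thrombosis = 1 / 150_000       # reported chance of thrombosis per jab
p_fatal_if_thrombosis = 0.2      # assumed share of those cases that are fatal (hypothetical)

p_infection = 0.10               # assumed chance of catching Covid-19 without the jab (hypothetical)
fatality_rate = 0.005            # assumed chance of dying once infected (hypothetical)
vaccine_efficacy = 0.8           # assumed reduction of that risk by the vaccine (hypothetical)

risk_without_jab = p_infection * fatality_rate
risk_with_jab = risk_without_jab * (1 - vaccine_efficacy) + p_thrombosis * p_fatal_if_thrombosis

print(f"risk of dying without the jab: {risk_without_jab:.6f}")   # ~0.000500
print(f"risk of dying with the jab:    {risk_with_jab:.6f}")      # ~0.000101
# Under these assumed numbers the jab clearly lowers the overall risk,
# yet the rare, vivid side-effect dominates many people's judgement:
# that is the negativity bias at work.
```
The point is not the particular numbers, which would have to be looked up, but the form of the comparison: a small, salient risk is weighed against a much larger but less vivid one.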

Sources
- Cherry, Kendra, “What Is the Negativity Bias?”, https://www.verywellmind.com/negative-bias-4589618
- Kahneman, Daniel, Thinking, Fast and Slow. Penguin Books, London, 2012; pp. 278-303.
- “Negativity bias”, Wikipedia, https://en.wikipedia.org/wiki/Negativity_bias

Thursday, April 08, 2021

Random Quote

The position which is in process of becoming superseded wastes its polemical energies on fighting already outmoded features in the opposed view, and tends to see what is retained in the emerging position as only a deformed shadow of its own self.

Georg Henrik von Wright (1916-2003)

Monday, April 05, 2021

Goodhart’s Law and Covid-19


You get an order from a building company to make one ton of nails. What will you do? I guess you’ll produce big nails, for that’s easier and cheaper than producing small nails. From another building company you get an order for one million nails. For them you’ll produce little nails, I think, for you need less iron to make them. If you behave that way, you follow Goodhart’s Law. Such behaviour is not a figment of my imagination; it often happened, for example, in the former Soviet Union, in order to fulfil the targets imposed by the planning authorities.
Goodhart’s Law was developed in 1975 by Charles Goodhart. He formulated it this way:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
Later Marilyn Strathern gave it its present wording, which is generally used since then:
When a measure becomes a target, it ceases to be a good measure.
Strathern also gave the law its present name. Applied to the introductory case of this blog: the nail factory gets an order but doesn’t ask itself what the customer needs, such as nails of different shapes and sizes. It only wants to execute the order as it is, whether that’s reasonable or not. The customers can avoid such a reaction by specifying the types of nails they need as precisely as possible.
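A small sketch in Python may make the mechanism of the nail-factory example explicit. The nail sizes and weights are made-up assumptions; the point is only that a producer who optimizes the literal target picks a degenerate solution that meets the measure but not the need.
```python
# A toy model of Goodhart's Law, based on the hypothetical nail factory above.
# The nail sizes and weights are made-up assumptions, not real data.

SIZES_KG = {"small": 0.01, "large": 1.0}   # assumed weight per nail in kg

def produce(target_unit, amount):
    """Fulfil the literal target as cheaply as possible."""
    if target_unit == "kg":
        # "One ton of nails": the fewest nails reach the weight, so make only large ones.
        size = max(SIZES_KG, key=SIZES_KG.get)
        count = amount / SIZES_KG[size]
    else:
        # "One million nails": the least iron is used, so make only small ones.
        size = min(SIZES_KG, key=SIZES_KG.get)
        count = amount
    return size, count

print(produce("kg", 1000))           # ('large', 1000.0)
print(produce("pieces", 1_000_000))  # ('small', 1000000)
# Both targets are met exactly, but a customer who needs a mix of sizes is not
# served: once the measure became the target, it ceased to be a good measure.
```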
Initially, Goodhart’s Law was used to describe economic behaviour. Later it also got a political interpretation, and in fact it can be applied, where relevant, to any kind of action.
At the basis of this law is a general sociological phenomenon that has been described by Jürgen Habermas in his theory of two levels of meaning: level 1 and level 0. Level 1 is the level all sciences are faced with when they theoretically interpret their objects of research. Level 0 is typical of those sciences that deal with objects that have already been given meaning by the investigated people themselves. This made me distinguish two kinds of meaning: meaning 1 and meaning 0 (see my 2001 article for a detailed explanation). Meaning 1 is the kind of meaning used on level 1. It is the meaning a scientist gives to an object, whether physical or social in character; it is the scientist’s theoretical interpretation of reality. Meaning 0 is the concept of meaning for the underlying level 0. It is the meaning that the people who make up social reality give to this social reality, or to parts of it, themselves; it is their interpretation of their own lived reality. We can apply this two-layer model to Goodhart’s Law as well. Then we get this: level 1 is the level at which the target is set in objective (measurable) quantities, for example by a customer or a policy maker. The people who have to realize the target are at level 0. They give it their own interpretation, one that suits them best. However, this subjective interpretation does not need to be what those who formulated the target thought it should be. If the people who have to realize the target take it literally as it is, it is quite possible that the target ceases to be a good measure. If so, Goodhart’s Law applies.
Goodhart’s Law can occur wherever quantitative targets are set, and so we also see it appear in the present Covid-19 pandemic. Quantitative targets are set to indicate when the pandemic has become manageable, and everything is done to reach them. However, at the same time the negative effects of the measures taken to achieve the targets are not seen, are ignored, or are brushed aside as unimportant in view of the higher goal of reducing the number of infections. Or the targets are simply evaded by people changing their behaviour. As for the latter, for instance, people get around the curfews by meeting others or shopping at the hours when it is still allowed, which leads to more people being together at those moments. As for the former, everywhere everything is done to reduce the number of Covid-19 patients, and as such it is a good target, but the price is high. Patients with other serious diseases and illnesses often cannot get the treatments they need, for instance because there is no room for them in the hospitals; in many cases this has led to early deaths. Or, another example, people have died from loneliness in nursing homes and at home because they were, and often still are, not allowed to meet other people, especially their relevant others. Yet other people come to suffer from depression, often so seriously that they commit suicide. Children suffer because they cannot go to school. Or think of the big negative consequences the restrictions have for the economic lives of many people, with the result that their quality of life has gone down and will probably remain lower for many years.
Now, I am the last to say that we should take less care of Covid-19 patients or that we shouldn’t take measures to stop the pandemic. That’s not the point. The point is that at the moment only the pandemic counts, and politicians pay attention to the negative effects of the restrictions only with words, not with deeds. It’s striking, for instance, that the Dutch Outbreak Management Team, which has the task of informing the Dutch government about the pandemic and proposing measures to contain it, has only doctors, virologists and the like among its members, but no psychologists or sociologists, who could assess the social and psychological effects of the measures and propose alternatives. Then one must not be surprised that Goodhart’s Law applies and that the anti-Covid measures cease to be good measures.

Sources
- Henk bij de Weg, “The commonsense conception and its relation to philosophy”, Philosophical Explorations, 2001/1, pp. 17-30.
- “Goodhart’s Law”, in Wikipedia, https://en.wikipedia.org/wiki/Goodhart%27s_law

Thursday, April 01, 2021

Random quote

The forced imposition of mathematical and mensurative methods has gradually led to a situation in which certain sciences no longer ask what is worth knowing but regard as worth knowing only what is measurable.

Karl Mannheim (1893-1947)

Monday, March 29, 2021

Omitting and responsibility


People are responsible for what they do but are they also responsible for what they omit to do? That’s what I want to discuss in this blog.
Omitting is not acting in a situation where you could have acted. Omitting can also be described as allowing something to happen. A man beats his wife and nobody interferes. A child has fallen into a canal and a passer-by who sees it refrains from jumping in after her or looking for help. There can be good reasons for doing nothing, but if someone refrains from acting where s/he could and should have acted, we call it “omission” and s/he can be blamed for it. If that’s right, we can ask the question of responsibility also in cases of omitting, and not only when someone actively performs an action.
Take now these cases:
Case 1: Victim is drowning and Agent is the only person around. The sea is infested with sharks and Agent has just seen one swimming by. Agent decides not to jump into the water to help Victim, and Victim dies. Since it is almost certain that the sharks would have attacked Agent and prevented him from saving Victim, I think that nobody will blame Agent for omitting to act or hold him responsible for the death of Victim.
Case 2: The same situation, but Agent does not know that the sea is infested with sharks and he hasn’t seen one. Again Agent decides to do nothing and Victim dies. Is Agent to be blamed for that and to be held responsible for Victim’s death? Some philosophers say “no”, for the death of Victim couldn’t have been prevented anyhow. As Willemsen (2020), p. 233 (who doesn’t endorse this view as such) explains: “In order to be morally responsible for the consequences of an omission … the agent needs to be able to perform a relevant action that would have prevented the outcome”, and that’s not the case here. Although this sounds reasonable, it is nevertheless a bit counterintuitive, and I think that many readers of this blog will not agree.
What’s the problem then? Why do we hesitate to say that in Case 2 Agent is not responsible for Victim’s death? In order to make this clear, let’s look a bit closer at the cases. Then we see that they differ in an important way. In Case 1 Agent refrained from acting because it would have made no sense to act and he knew it. If he had tried to save Victim and had jumped into the sea, the sharks would have attacked him and might have killed him. Therefore, his omitting to act was involuntary. Case 2 says nothing about the reasons why Agent didn’t act, but given the description of the case as it is, it would have been reasonable for Agent to jump into the water, for he didn’t know about the sharks. At least he could have tried to save Victim. The sharks would have attacked him, and let’s hope that he would have escaped, but Agent in Case 2 didn’t know that this could happen. So it’s true that Agent in Case 2 cannot be held responsible, at least not morally responsible, for the death of Victim, but we can hold him (morally) responsible for not having tried to save Victim. In Case 1, however, Agent knew in advance that trying to save Victim would be pointless, and so for him trying was no option. Therefore, in Case 1 Agent cannot be held (morally) responsible for not having given it a try.
In an important sense we cannot hold Agent responsible for Victim’s death in either case: he didn’t put him in his perilous situation. Nonetheless, we can ask whether Agent was morally responsible for Victim’s death in the sense that Agent could have prevented it. When doing so we must realize that “moral responsibility” can be understood in two ways. It can refer to the results or consequences of an action (consequential moral responsibility) or it can refer to the action as such (actional responsibility). In the former sense Agent is not responsible for Victim’s death, neither in Case 1 nor in Case 2. In Case 2, however, we can hold Agent responsible in the latter sense, while in Case 1 nothing can be held against Agent in that way.

Source and inspiration
- Pascale Willemsen, “The Relevance of Alternate Possibilities for Moral Responsibility for Actions and Omissions”, in Tania Lombrozo, Joshua Knobe and Shaun Nichols (eds.), Oxford Studies in Experimental Philosophy. Volume Three. Oxford: Oxford University Press, 2020; pp. 232-274.

Thursday, March 25, 2021

Random quote

If men define situations as real, they are real in their consequences.

W.I. Thomas (1863-1947)

Monday, March 22, 2021

Being responsible for what you do


Whether an agent is responsible for what s/he does depends on whether s/he did what s/he did intentionally or whether what s/he did happened to him or her. We saw this in my last blog. In my argument I referred to Donald Davidson and, without saying so, I also made use of what Davidson has written about the subject, although I didn’t fully follow his line of reasoning. Davidson was one of those who discussed the relationship between acting intentionally and responsibility from an analytical philosophical perspective, but as such the theme is as old as philosophy itself. Look, for example, at what Aristotle said about it at the beginning of Book III of the Nicomachean Ethics:
“Since virtue is concerned with passions and actions, and on voluntary passions and actions praise and blame are bestowed, on those that are involuntary pardon, and sometimes also pity, to distinguish the voluntary and the involuntary is presumably necessary for those who are studying the nature of virtue, and useful also for legislators with a view to the assigning both of honours and of punishments. Those things, then, are thought involuntary, which take place under compulsion or owing to ignorance; and that is compulsory of which the moving principle is outside, being a principle in which nothing is contributed by the person who is acting or is feeling the passion, e.g. if he were to be carried somewhere by a wind, or by men who had him in their power.” (III 1109b30-1110a4)
So, according to Aristotle, actions are either voluntary or involuntary, and this determines whether or not we are responsible for them. It is as simple as that. Or is it? As I have argued in some old blogs (for example in Digging your garden alone or Do pure individual intentions and actions exist?), actions rarely are isolated events; usually they are embedded in, or at least depend on, what others do or have done. Aristotle saw this too, for he goes on to say:
“With regard to the things that are done from fear of greater evils or for some noble object (e.g. if a tyrant were to order one to do something base, having one's parents and children in his power, and if one did the action they were to be saved, but otherwise would be put to death), it may be debated whether such actions are involuntary or voluntary.” (III 1110a4-8) In other words, you can be forced by the circumstances to do what you don’t like to do, even if in theory you are free to act in a different way, although no one expects you to do so. Usually things are not as simple as a dichotomy can make you think they are. Pure dichotomies are exceptional.
Rather than going on with what Aristotle says about the question, I want to give some examples in order to clarify the present problem a bit (the examples are taken from Manninen 2019).
The first example is rather clear: A person gets in her car and goes driving. Suddenly she has a stroke, loses control of her car and causes a collision that results in fatalities. The driver is then causally responsible for the collision, but most of us will agree that morally she isn’t: the collision was caused by factors beyond the driver’s control.
The second example is the much-discussed Eichmann case. Adolf Eichmann was sentenced to death for his contribution to the Holocaust. He stated, however (again I follow Manninen here): “There is a need to draw a line between the leaders responsible and the people like me forced to serve as mere instruments in the hands of the leaders.” Orders are orders, aren’t they? Not so, it was judged, and Eichmann was hanged, among other things on account of Nuremberg Principle IV, which says: “The fact that a person acted pursuant to order of his Government or of a superior does not relieve him from responsibility under international law, provided a moral choice was in fact possible to him.” So, even if you are ordered to do something, you remain personally responsible for the moral consequences of what you do, unless there is reasonably no escape. The “unless” is crucial and gives ground for discussion about when and whether an agent really acted freely. That’s why the Eichmann case has been discussed so much.
The upshot is that responsibility for what you do often depends on the context in which the deed is done, for the context often determines whether what you do is an intentional action or something that happens to you. And whether a deed counts as intentional depends on how it appears to others, rightly or mistakenly.

Sources
- Aristotle, Nicomachean Ethics, http://classics.mit.edu/Aristotle/nicomachaen.mb.txt.
- Tuomas W. Manninen, “Diminished Responsibility”, in Arp, Robert; Steven Barbone; Michael Bruce (eds.), Bad arguments. 100 of the most important fallacies in Western philosophy. Oxford, etc.: Wiley Blackwell, 2019; pp. 145-148.

Thursday, March 11, 2021

Random Quote

Every society as a whole learns that happiness cannot be equated with development.

Michel de Certeau (1925-1986)

Monday, March 08, 2021

Action, deed and responsibility


The question what an action is has long been a hot topic in action theory. Recently I wrote a blog about it (see my blog “What is an action?”). But take now this case: a hired assassin kills the wrong person, something that now and then happens. To keep my example simple, let’s say that the assassin kills a passer-by by mistake, when aiming at his intended victim. Then the question is: What action did the shooter perform? This is not only a philosophical question; it is also important to know if we want to ascribe responsibility and administer punishment.
Generally we can say that an action is a piece of behaviour with an intention, as I explained in the blog just mentioned. But suppose that a policeman pushed the shooter in the back at the moment the man pulled the trigger, which made the shooter miss his intended victim and kill the passer-by by accident. Can we then attribute the killing of the passer-by to the shooter? For isn’t it so that the shooter did not intend to do so, that he didn’t aim at the passer-by, and that he was pushed in the back?
Take this: A woman has a cup of coffee in her hand and she spills the coffee. She can have done it intentionally, she can have done it unintentionally, someone may have pushed against her hand intentionally, someone may have done this unintentionally, etc. In the third and fourth cases we wouldn’t say that it was the woman who spilled the coffee. In the same way, we can say that in case the shooter got a push in the back it wasn’t he who killed the passer-by. If it was anyone who killed the passer-by, it was the pusher, so the policeman. Right? Is this true even if the policeman wanted to prevent the shooter from killing the intended victim? Maybe we should say that the policeman should have been more careful and that he can be blamed for that, but nevertheless we would say that it was the shooter who killed the passer-by. Even more, the shooter can be punished for having killed the passer-by. Why? For isn’t it so that the shooter didn’t intend to kill the passer-by, as we have seen, and wouldn’t have shot if he had known beforehand that he would be pushed in the back?
Let’s compare the coffee spiller and the shooter again. The coffee spiller did not intend to spill the coffee, i.e. she did not intend to move her hand in such a way that she would spill the coffee. Moreover, she couldn’t expect that someone would push against her hand. So far, the cases of the shooter and the coffee spiller are analogous: both do what they do unintentionally, at least if we look at the consequences of what they do. The shooter, however, did have the intention to move his hand in such a way that he would shoot, albeit not in the manner it actually happened. He hadn’t expected that someone would push his back, indeed, but the shooting as such was intentional, or, as I want to say, the deed of shooting was intentional. The spilling of the coffee, on the other hand, was for the spiller simply a movement of her hand: it happened to her. The hand movement wasn’t a deed of hers. The shooting, however, didn’t merely happen but was done by the shooter himself, and someone who shoots knows that much can go wrong, for instance that there is a chance that he will miss. The shooting that led to the killing of a passer-by was an intentional act, while the hand movement that led to the spilling of the coffee was not (for the coffee spiller). And, as Donald Davidson says: “a man is the agent of an act if what he does can be described under an aspect that makes it intentional.” (p. 46) Therefore, although in a sense we can say that the killing of the passer-by was not an action the shooter performed (for it wasn’t his intention to perform this action), it was something that the shooter did, while the spilling of the coffee was not something that the coffee spiller did, since it merely happened to her. That is why the shooter is responsible for the consequences of his shooting while the coffee spiller isn’t. And maybe the punishment of the shooter will be even more severe for having killed an innocent man than it would have been if he had killed his intended victim. But that is for the court to decide.

Source
- Davidson, Donald, “Agency”, in Essays on actions and events. Oxford: Clarendon Press, 1980; pp. 43-61.

Thursday, March 04, 2021

Random quote

We like to give a beautiful name to what belongs to us and to use mean words for what belongs to others. 

Pierre Bayle (1647-1706)

Monday, March 01, 2021

The Covid Paradox



When browsing the internet, I stumbled on a Covid paradox. Some examples:

1) “The World Health Organization’s Europe director Hans Kluge said Thursday the continent is in the midst of what he calls the COVID-19 ‘pandemic paradox,’ in which vaccine programs offer remarkable hope, while emerging variants present greater uncertainty and risk. … ‘This paradox, where communities sense an end is in sight with the vaccine, but at the same time are called to adhere to restrictive measures in the face of a new threat, is causing tension, angst, fatigue and confusion…,’ Kluge said.” (see note 1).
2) Another article, titled “The COVID Paradox” (see note 2), says that nobody should overstate the “pain and loss of this era in human history. … When organisational life is normal, patterns often continue because they existed before. When crisis happens, however, what was once intractable becomes open. Cultural norms can be analysed and adjusted. Leaders who did not have time or appetite for change demand new ways of thinking and working.” In other words, the pandemic doesn’t bring only misery, but, paradoxically, it creates also chances for new developments.
3) In another article titled “The Covid Paradox” (see note 3), the tenor is the same, although the accent is somewhat different: “While in the short run, one would arguably return to pre-Covid behavior patterns quite quickly, we are likely to see more fundamental changes play out in the long run. The long-term impact of Covid is likely to be far more significant than its immediate effect in the next year or two. Reactive change tends to feel significant, but is not necessarily durable, but the Covid experience will produce organic shifts in mindsets that will make themselves manifest over a much longer period of time. Covid will be transformative, but not in the way that it was imagined a few months ago.” 

Actually, we have different paradoxes here. The COVID-19 pandemic paradox in 1) says that the assumed solution to the pandemic doesn’t bring a solution. This sounds paradoxical indeed, but I think that we cannot really speak of a paradox here. The present vaccines help to stop only one strain of the coronavirus, not possible new strains, so they solve only a part of the problem, and there is nothing paradoxical in that. We simply need better solutions, like improved vaccines, and then the restrictions can be lifted after all.
Cases 2 and 3 have more the air of being paradoxes, and in a sense they are. On the other hand, they simply describe normal facts of life. The difference is that the scale of the pandemic is much larger. When the road to the left is blocked, we choose the road to the right. When in a supermarket the shelf with rice is empty, we buy millet or potatoes. Your attention is drawn to new options and maybe it leads to new behaviour. Nobody calls this paradoxical.

Nevertheless, there is at least one paradox that is relevant to this pandemic: the Sorites paradox. Sooner or later the number of coronavirus infections will go down, be it because the restrictions are effective, or, what is more likely, because vaccinations end the pandemic, or because the pandemic ends in a natural way. Then the question is: When can we say that the pandemic has ended? How many patients make the difference between a pandemic and a “normal” situation, in which some people are ill and most are not, and in which the chance that the coronavirus will spread again has been minimized? The Sorites paradox is about an analogous question: How many grains of sand make a heap? Or, formulated in a way that is more relevant to the present pandemic: How many grains can we remove from a heap of sand before it is no longer a heap? Remove one grain and you’ll still call it a heap. Take away another grain, and it is still a heap. But what if you have removed, one by one, a thousand grains? Or a million? Do we then still have a heap? At some point the heap will no longer be a heap, but how many grains must be removed until we have reached that point? Until now, nobody has given a convincing answer to this question. Actually, such an answer doesn’t exist. It is a matter of subjective decision and definition.
In the case of the present pandemic, we basically have the same question as in the Sorites paradox. When the number of Covid-19 patients goes down, finally we’ll no longer have a pandemic, but when will we have reached that point? This question is important in order to determine when the restrictions can be lifted and to what extent, but in fact nobody knows the answer. Each country has its own ideas about it. It is just a matter of policy (making choices) and politics (the execution of choices); a matter of intelligent guesswork and of establishing safe standards. It’s better to stay on the safe side and maintain the restrictions that must contain the pandemic too long rather than too short. But being overcautious in view of the pandemic can be dangerous in other respects, for instance for the mental health of the population and for its economic health (which in the end also affects the mental and physical health of a population). It will be a wise person who knows what to do.
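To make the structure of the problem a bit more tangible, here is a minimal sketch in Python. It encodes the two Sorites premises and contrasts them with a stipulated cut-off; all numbers (a million grains, a threshold of 1,000 daily cases) are arbitrary illustrative assumptions, not epidemiological data.
```python
# A small sketch of the Sorites reasoning and of the only practical way out.
# All numbers are arbitrary, illustrative assumptions.

def is_heap_by_tolerance(grains, start=1_000_000):
    """Premise 1: 'start' grains are a heap.
    Premise 2: removing a single grain never turns a heap into a non-heap.
    Applied over and over, the two premises imply that any smaller number
    of grains, even zero, is still a heap."""
    return start >= 1_000_000 and grains <= start

def is_pandemic_by_decree(daily_cases, threshold=1_000):
    """The practical escape: simply stipulate a threshold.
    Where exactly it lies is a matter of decision, not of logic."""
    return daily_cases >= threshold

print(is_heap_by_tolerance(0))        # True: the paradoxical conclusion
print(is_pandemic_by_decree(999))     # False: one case below an arbitrary line
print(is_pandemic_by_decree(1_000))   # True: one case above it
```
The sketch is not an argument, of course; it only shows that vague predicates like “heap” or “pandemic” force us to draw a line somewhere, and that where the line lies is a stipulation.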

Notes
1) https://www.voanews.com/covid-19-pandemic/who-europe-chief-says-region-midst-covid-19-pandemic-paradox
2) https://www.russellreynolds.com/newsroom/the-covid-paradox
3) https://timesofindia.indiatimes.com/blogs/Citycitybangbang/the-covid-paradox/ 

Monday, February 22, 2021

Bayle and Montaigne on torture

Detail of a memorial stone on the remains of the gallows of Amerongen, Netherlands.

When Bayle wrote his treatise on tolerance, he seldom referred to other thinkers who had influenced him or whose ideas he used in order to substantiate his position. However, there is one striking exception: Montaigne. After having read my last blog, I think you’ll not be surprised that Bayle fully rejected a practice that was common and also legal in his days and that was often used in judicial fact-finding examinations: torture. Torture was also used when (religious) authorities like the Inquisition wanted to convert or punish heretics. Although Bayle is generally quite comprehensive in his argumentation, he is not so when he rejects torture. He simply says that torture often makes the accused confess crimes that they haven’t committed, and then he goes on: “Montaigne writes about this very wisely: ….”, followed by a long quotation after the colon. Since torture is still practised, legally and illegally, I think that here, too, it is worthwhile to quote Montaigne in full:
“The putting men to the rack is a dangerous invention, and seems to be rather a trial of patience than of truth. Both he who has the fortitude to endure it conceals the truth, and he who has not: for why should pain sooner make me confess what really is, than force me to say what is not! And, on the contrary, if he who is not guilty of that whereof he is accused, has the courage to undergo those torments, why should not he who is guilty have the same, so fair a reward as life being in his prospect? … But when all is done, ’tis, in plain truth, a trial full of uncertainty and danger: what would not a man say, what would not a man do, to avoid so intolerable torments? ‘Pain will make even the innocent lie.’ Whence it comes to pass, that him whom the judge has racked that he may not die innocent, he makes him die both innocent and racked.” (Essays, Book II-5). And Bayle adds: “These are really the most terrible effects of the terrible pains that a man, whose limbs are violently stretched out, has to suffer.” (Tolerance II-2).
As said, in Montaigne’s days, and a century later when Bayle lived, too, torture was an accepted practice in judicial examinations and also as a means of punishment. At that time it was often carried out in public. Since torture is nowadays illegal in most countries, and therefore hidden, it is difficult for modern man to imagine how cruel it was. You can get an impression by visiting a torture museum or by googling a bit on the internet and looking for torture instruments. It’s unbelievable what kinds of cruel instruments man has developed through the ages (and actually still develops).
Montaigne was closer to the practice of torture than Bayle. While Bayle was “no more than” a philosopher, Montaigne worked for many years as a counsellor at the courts (“Parlements”) of Périgueux and Bordeaux. However, he was neither directly involved in this practice, nor did he ever order anyone to be tortured. Montaigne was a kind of examining magistrate, and his job was collecting information and evidence for lawsuits. He didn’t pass judgements himself. By the way, Montaigne was not against the death penalty (nor was Bayle), but he wanted a short and simple execution.
In the essay just quoted (titled “Of conscience”) Montaigne not only demonstrated that torture is pointless and senseless; with an example he also showed that it can be unjust:
“A country-woman, to a general of a very severe discipline, accused one of his soldiers that he had taken from her children the little soup meat she had left to nourish them withal, the army having consumed all the rest; but of this proof there was none. The general, after having cautioned the woman to take good heed to what she said, for that she would make herself guilty of a false accusation if she told a lie, and she persisting, he presently caused the soldier’s belly to be ripped up to clear the truth of the fact, and the woman was found to be right. An instructive sentence.”
But what if the country-woman had lied and the soldier was innocent? It makes me think of another practice that was also not unusual in Montaigne’s time: women accused of being witches were thrown into a lake. If a woman remained afloat, she was a witch and was hanged after all; if she sank, she was innocent. Too bad that she didn’t survive the test.
From the end of the 18th century on, the legal practice of torture almost disappeared. The number of offences punishable by death also diminished a lot. While before a simple theft could be punished with death, since then in most countries the death penalty can be imposed only for murder or for serious violations of the public order. In many countries, especially in Europe, torture and the death penalty have become illegal altogether. Nevertheless, illegal torture and illegal death penalties are still widely practised. The recent attempt to murder the Russian opposition leader Alexei Navalny is an example of a failed illegal execution. Or think of Belarus, where arrested demonstrators have been tortured simply because they were protesting against the illegal re-election of “their” president.
I think that everybody will agree that torture is cruel, for otherwise it would not be practised. It is practised just because it is cruel and people cannot withstand the suffering. However, just because of this – to quote Montaigne again –, “[a] thousand and a thousand have charged their own heads by false confessions … Are not you [then] unjust, that, not to kill him without cause, do worse than kill him?” But is not the cruelty already reason enough to stop the practice? 

Sources
- Montaigne’s Essays quoted from https://oll.libertyfund.org/title/hazlett-essays-of-montaigne-vol-4#lf0963-04_head_006
- Pierre Bayle: See blog last week.

Monday, February 15, 2021

Tolerance

Pierre Bayle

The modern idea of tolerance goes back to what Baruch de Spinoza and John Locke have written about it. Of course, they had their predecessors, such as Montaigne and the Dutch Renaissance scholar Dirck Volckertszoon Coornhert (1522-1590). Less known is Pierre Bayle’s contribution to the development of the idea.
Pierre Bayle (1647-1706) was a French Huguenot who had to flee his country because of his religion. He lived many years in exile in Rotterdam in the Netherlands, where he also died. He was one of the leading scholars of the Early Enlightenment. One of his main works was Tolerance. A philosophical commentary (abbreviated title), published in 1686. In this book he developed one of the “three leading tolerance conceptions of his time”, according to Buddeberg and Forst (p. 21). The work must be seen against the background of the persecution of Protestants in Roman Catholic countries in the 17th century, especially in France. Before Bayle, Spinoza had already argued for an individual freedom of religion, although the state could determine which religion would be publicly practiced within its territory. A few years after Bayle, Locke argued for a separation of church and state. Religions should be allowed to organize themselves in voluntary societies and there should be freedom of conscience. The state could interfere only when a religion questioned the authority of the state or when a denial of the existence of God undermined the moral foundations of society. (ibid.) Bayle was more radical and pleaded for freedom of conscience without qualification.
Bayle’s Tolerance consists of two parts. In part 1 he rejects the arguments, based on Augustine’s view, that it is allowed to force non-believers with violence to accept the right Christian faith. In part 2 Bayle discusses possible replies to his arguments in part 1. One can summarize Bayle’s argumentation against violent intolerance as follows: violent intolerance is either 1) hypocritical, or 2) senseless, or 3) counterproductive. Let me explain.
 

1) Bayle says, for instance: Assume your faith is a minority faith in the country where you live and you are persecuted by the defenders of the main religion (as was the case in the early days of Christianity in the Roman Empire, for example). Or you are sent as a missionary to China, but the Emperor of China chases you away by force. What would you say then? Indeed, your view would be that the rulers have no right to do so and you’ll detest what they do. But what right, then, do you have to persecute and kill others who don’t accept your faith when you are the ruler of a country? It’s hypocritical to think that you are allowed to persecute non-believers just because you have the power to do so. You should not do to others what you don’t want them to do to you.
2) Persecuting those who don’t share your faith is not only hypocritical, it is often also senseless. Assume now that you are persecuted for having a minority religion. For example, you are a Huguenot in France at the end of the 17th century. You are no longer allowed to hold public and many non-public functions. Your possessions are seized by the state. Many people with your religion are tortured because of their faith and you fear being tortured, too. You can even be killed because of your faith. What will you do? It’s not unlikely that you’ll think: let me pretend that I have given up my faith and feign that I have accepted the official religion. And so you do, and from then on you go to the state church or temple and perform the prescribed rituals. But in your heart you still believe what you always believed. Your conversion is mere appearance. The attempts to convert you by force have been senseless.
3) It is also possible, however, that violent attempts to convert non-believers fail and only strengthen them in their belief. Doesn’t the Gospel already say that your faith will be badly received by the world? That’s what you are experiencing now, and you think that your salvation is in the hereafter, not in this world. If non-believers react in such a way, attempts to convert them by violence are simply counterproductive.

These are the main reasons why Bayle pleads for a complete tolerance of all religious views and for freedom of conscience. He wants tolerance of different and dissenting religious views in any case. Actually, there is only one standard for what you believe: your conscience. For what else should decide which religion, view or opinion is true? Who will say which conviction is best? It is absurd to say that there is a criterion to decide this. If you think that such a criterion exists, what actually happens is that the strongest wins and that the arguments of the strongest are seen as best. Then being true and being the strongest are different words for the same thing. Or, as Bayle also says: We give a beautiful name to what is ours but hold in contempt what belongs to others.
Bayle’s target was religious intolerance, but his arguments are valid against all kinds of intolerance. This makes Bayle one of the founders of the modern idea of tolerance, together with Spinoza and Locke. Although Bayle is now less well known than they are, his view has certainly been as influential.

Source
Pierre Bayle, Toleranz. Ein philosophischer Kommentar. Edited by Eva Buddeberg and Rainer Forst. Frankfurt/Main: Suhrkamp, 2016.
There are several English editions of Bayle’s Tolerance. Just google.

Monday, February 08, 2021

Fake News



Now that Trump has left the White House, an era of fake news has ended. Do you believe it? I don’t. Fake news has existed in all ages. Millenarian movements are a case in point.
If you ask what fake news is, many people will have an answer. Nevertheless, I think that a correct reply to this question is not easy. Fake news is not simply a message, or more generally a claim, that is presented as true although actually it is false. For example, if someone in the 13th century had said that the earth revolves around the sun, he would probably have been accused of spreading fake news. He might even have been sentenced to the stake for that view. At least in Europe this could happen. In the 17th century, however, there were already many people who believed that the statement was true, although in some countries it was still dangerous to say so (see what happened to Galileo). Now, in the 21st century, almost everybody thinks that the statement is true and it is safe to express this view. Nevertheless, people in the 13th century had good reasons to think that it is not the earth that revolves around the sun, but that it is the other way round. Then (in Europe) the highest authority for truths was the Bible, and hardly anybody called into question what is written in this Holy Book. Today the highest authority is science. This shows that what counts as fake news is not only an objective matter but also a social affair.
Here we see that what fake news is, is determined by two factors: its truth and its relation to other statements that are considered to be true. However, both factors are problematic, for when do we call a statement true? There are several definitions of “true” (or “truth”). The most accepted definition says that a statement is true if it corresponds to the facts, or, as others say, to the actual state of affairs. But how do we know what the facts are or what is actual? Here we have a problem. In short it is this: In order to know what a fact is, we must know what is true, and in order to know what is true, we must know what the facts are. The reasoning is circular. You might try to solve the problem by developing reliable measuring instruments, but even then the question remains when we consider an instrument reliable. Another way to solve the problem is to say that the search for truth has a long history and that we must try to relate new statements to the already established facts in order to find out whether they are true. If a new statement coheres with the old truths, it is probably also true; if it doesn’t, it is apparently false. However, does it get us any further? For in medieval Europe what the Bible said was considered true, so seen that way the idea that the sun revolves around the earth was correct. Today we say that it is false.
With this I have gradually come to the second factor just mentioned that determines whether a statement is fake or fact: its relation to other statements that are considered to be true. This leads us to another theory of truth: A statement is true not if it corresponds to the facts, but if it properly fits with a system of coherent facts that have been accepted on reasonable grounds. However, this approach is problematic as well, for here too the question arises what makes us accept a system of coherent facts. Again the reasoning is circular. Nevertheless, I think that there is an elegant solution to this circularity problem. Actually it combines both views on truth just discussed and it leads to a kind of spiral idea of truth: Karl R. Popper’s idea of error elimination. Represented schematically it goes this way:
P1 > T1 > E > T2 > P2.
P1 is a problem we want to discuss. Then we form an idea (statement, theory) of how things might be arranged (T1). Next we test the idea with what we consider reliable means, such as an experiment, in order to judge whether it holds. If not, we eliminate the idea as being false. If it holds, we add it to the existing stock of knowledge, which leads to an improved or extended theory (T2). Our knowledge has reached a higher level, so to speak. It has spiralled upward. Now the process can start anew with P2.
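For readers who like to see the cycle spelled out step by step, here is a minimal sketch in Python of how one pass through the schema could be represented. It is my own illustration, not Popper’s notation or anything from the sources below; the names “propose” and “test” are hypothetical stand-ins for however one arrives at and checks an idea.

# A minimal sketch of one pass through P1 > T1 > E > T2 > P2.
def error_elimination(problem, propose, test, knowledge):
    """Form a tentative idea (T1) for a problem (P1), test it (E),
    and either eliminate it or add it to the stock of knowledge (T2)."""
    theory = propose(problem, knowledge)       # T1: a tentative idea about P1
    if not test(theory):                       # E: confront the idea with experience
        return knowledge, problem              # the idea is eliminated; P1 remains open
    knowledge = knowledge + [theory]           # T2: the extended stock of knowledge
    new_problem = "What does this improved theory still leave unexplained?"  # P2
    return knowledge, new_problem

# Toy usage with made-up inputs:
knowledge, next_problem = error_elimination(
    problem="Do heavy and light balls fall equally fast?",
    propose=lambda p, k: "Acceleration in free fall is independent of mass",
    test=lambda t: True,    # imagine an experiment that corroborates the idea
    knowledge=[],
)

The point of the sketch is only that knowledge grows by eliminating ideas that fail the test and keeping, provisionally, those that survive it.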
So far, so good. But this works for science, where we have the time, means and money to test statements, not in daily life, where these resources are often scarce. Even so, I think that what I have written here can be used as a guideline to judge what is fake and what is fact, if critically applied. The central questions are: What is the established stock of facts and what does a new view or statement add to it? Can it be fitted into the established stock? Does it undermine established facts, and if so, do we have reasons to believe that it undermines these facts on good grounds? Actually, this is the only thing the average citizen can do, but it is at least something she or he can do in order to distinguish fact from fake.

Sources
- Karl R. Popper, Objective Knowledge. Oxford: Clarendon Press, 1979; p. 164.
- My blog “Why it is good to make a bad plan”, dated 13 July 2015.

Monday, February 01, 2021

What’s in a name?


Would Hillary Clinton have beaten Donald Trump in the American presidential elections in 2016 if she had had a less round-shaped face? This is what studies on the Kiki-Bouba effect discussed in my blog last week implicitly suggest. For when I searched for information on the Kiki-Bouba effect for that blog, I saw that it is not an isolated phenomenon. It is an effect with social implications. This became clear to me when I read a research article by David N. Barton and Jamin Halberstadt. It will be the main source for my present blog (see Source below).
Barton and Halberstadt wondered whether there is a connection between a personal name and the kind of face that you think belongs to that name, or rather what shape the face of a person named so and so should have. Or, the other way round, what kind of name would best fit persons with round faces or persons with angular faces. In short, the researchers wanted to know whether there is a kind of Kiki-Bouba effect between name and face shape. So should Bob have a round face and Kirk an angular face?
In order to investigate this question, Barton and Halberstadt selected names that require a rounding of the mouth to pronounce, like Paul, Bob and George, and names that require a more angular shaping of the mouth, like Pete, Kirk and Mickey, and they took differently shaped faces (rounded or angular; the researchers used only male names and male faces). Then they either combined names and faces arbitrarily or had test persons make name-face combinations. The researchers did several tests, such as asking test persons to rank-order names in terms of their suitability for ten rounded and ten angular male face caricatures when the name-face combinations were given; to assign the selected names to the faces; etc. The result of the tests was unequivocal and significant: Rounded names should belong to rounded faces and angular names should belong to angular faces. A man with a rounded face should have a name like Paul, Bob or George, and if he had an angular face a good name for him would be Pete, Kirk or Mickey. It is likely that the relationship also exists for female names and female faces.
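To make the logic of such a congruence test concrete, here is a toy sketch in Python with invented ratings. It is only my illustration of the kind of comparison involved, not Barton and Halberstadt’s actual analysis or data.

# Toy comparison: do congruent name-face pairs get higher "fit" ratings?
ROUND_NAMES = {"Paul", "Bob", "George"}      # names pronounced with a rounded mouth
ANGULAR_NAMES = {"Pete", "Kirk", "Mickey"}   # names pronounced with a more angular mouth

# Hypothetical ratings: (name, face shape, how well the name fits, on a 1-7 scale)
ratings = [
    ("Bob", "round", 6), ("Bob", "angular", 4),
    ("Kirk", "angular", 6), ("Kirk", "round", 3),
    ("George", "round", 5), ("Pete", "round", 4),
]

def is_congruent(name, face_shape):
    """A pair is congruent if a 'round' name goes with a round face
    or an 'angular' name with an angular face."""
    return (name in ROUND_NAMES and face_shape == "round") or \
           (name in ANGULAR_NAMES and face_shape == "angular")

congruent = [score for name, face, score in ratings if is_congruent(name, face)]
incongruent = [score for name, face, score in ratings if not is_congruent(name, face)]

print("mean fit, congruent pairs:  ", sum(congruent) / len(congruent))
print("mean fit, incongruent pairs:", sum(incongruent) / len(incongruent))

In the study itself this comparison was of course made with real participants and proper statistics; the sketch only shows what “rounded names belong to rounded faces” means as a measurable difference.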
In view of this result, I am glad that my given name fits the shape of my face. For the connection between name and face is not trivial; it has social implications. The test persons didn’t only think that name and face should fit, but they also preferred persons with well-fitting name-face combinations. They liked them more, slightly but consistently. Therefore, Barton and Halberstadt also investigated the political consequences of the name-face relationship. They refer to an article by G. Friedman (2015) that shows “that candidates with extremely well-fitting names won their seats by a larger margin – 10 points – than obtains in most American presidential races, [which] suggests the provocative idea that the relation between perceptual and bodily experience could be a potent source of bias in some circumstances.” Research by the authors themselves shows that “well-named” political candidates who ran for the U.S. Senate between 2000 and 2008 inclusive had an advantage over those with non-congruent names, earning a greater proportion of votes.
In view of all this, Barton and Halberstadt conclude: “People’s names … are not entirely arbitrary labels. Face shapes produce expectations about the names that should denote them, and violations of those expectations carry affective implications, which in turn feed into more complex social judgments, including voting decisions.” Therefore, it is not too bold to say that, if Mrs. Clinton’s given name had not been Hillary but, say, Rose, she would have won the American presidential elections in 2016. So, don’t say “What’s in a name?” anymore.

Source
Barton, David N.; Jamin Halberstadt, “A social Bouba/Kiki effect: A bias for people whose names match their faces”, in Psychonomic Bulletin & Review, vol. 25 (2018), pp. 1013–1020. Also on website https://link.springer.com/article/10.3758/s13423-017-1304-x 

Monday, January 25, 2021

The Kiki-Bouba effect


Look at the picture above. Which of the two figures would you call Kiki and which would you call Bouba? I guess that you’ll say that the left one with jagged shapes is Kiki and the right one with round shapes is Bouba. If so, you are not alone. More than 95% of all people who were asked this question gave the same answer. Moreover, it’s an intercultural phenomenon and it’s also independent of age. The Kiki-Bouba effect prevails even in cultures with no written language and among pre-reading children. Apparently, the relation between the sound of a word and the image it evokes (and the other way round) is to a high extent innate in man. It belongs to man’s nature. I write “to a high extent” on purpose, for, as we just saw, not 100% of all who were asked the question called the jagged figure Kiki, and for some groups, like autistic people, and for some cultures the connection is weaker, though it still exists. Nevertheless, we can call the Kiki-Bouba effect a natural phenomenon.
The effect was first discovered in 1929 by the German-American psychologist Wolfgang Köhler, but it has especially been investigated since Vilayanur S. Ramachandran and Edward M. Hubbard repeated Köhler’s experiment in 2001. They also introduced the nonsense words Kiki and Bouba. The effect also exists outside language. For instance, we call some music romantic and other music wild. In music the effect is used to evoke certain feelings or emotions.
Linguists have always thought that the relation between the sound of a word and its meaning is arbitrary. The Kiki-Bouba effect shows that this view is not entirely right. Of course, when we say “table”, it is not inherent to the word that we are talking about a certain piece of furniture with four legs. But the opposite would be that a relationship between language and meaning is completely absent, and that’s not true either. I have already shown in my blog “How we think, at least initially”, dated 18 November 2013, that such a relationship exists. The Kiki-Bouba effect is another instance of this relationship.
There are several explanations for the phenomenon. One is quite interesting in view of the relation between language and the way we think: ideasthesia. Ideasthesia is the term for the neuroscientific phenomenon in which activations of concepts evoke perception-like sensory experiences. Synesthesia is a well-known example. In the case of synesthesia people have sensory responses in reaction to external stimuli. For instance, they see colours when they hear music. Ideasthesia is broader and also involves the cognitive and semantic aspects of the stimulus: The relationship involved is dependent on the meaning of the stimulus. In the relationship between music and colours, for instance, meaning doesn’t play a part; the phenomenon just happens. In the Kiki-Bouba effect, however, meanings are essential. If you think of a sharp sound you can think of a knife, because a knife is also sharp, albeit in another sense. In the same way there is a relation between Kiki and jagged shapes and between Bouba and round shapes. In view of the idea that there is a connection between language and thought this is quite interesting. The Kiki-Bouba effect is an instance of the way we think. It says something about how we are constituted. It’s an example of how, in a sense, a concept determines the way we look at the world and see reality. Language influences our thinking about the world, although it goes in two directions, of course, for what we see also influences how we speak about it: whether something is Kiki or whether it is Bouba.

Some sources and literature
- “Bouba/kiki effect”, in Wikipedia, https://en.wikipedia.org/wiki/Bouba/kiki_effect
- Peiffer-Smadja, Nathan; Laurent Cohen, “The cerebral bases of the bouba-kiki effect”, on https://www.sciencedirect.com/science/article/pii/S1053811918321141
- Ramachandran, V.S.; E.M. Hubbard, “Synaesthesia – a window into perception, thought and language”, Journal of Consciousness Studies, 2001/8: 3-34.
- “What is the Kiki-Bouba test?”, on https://brainstuff.org/blog/what-is-the-kiki-bouba-test
- Shukla, Aditya, “The Kiki Bouba effect – research overview & explanation”, on https://cognitiontoday.com/the-kiki-bouba-effect-research-overview-explanation/#The_Kiki-Bouba_effect