Monday, August 24, 2015

Space and time in society


The intrusion of the private into the public, but also of the public into the private, is remarkable: not because it happens, but because it points to some interesting aspects of social life. As a sociologist I am used to thinking about society in terms of social relationships, so in terms of the way we connect with others and what these connections mean to us. However, society is not only a matter of relations: how social life takes place also has something to do with where relationships are entered into. Moreover, social relationships are in some way temporal as well. Now, I am the first to admit that this idea is nothing new; other sociologists and philosophers have written about it before. Nevertheless, the aspects of place and time are often ignored when society and social relations are studied, and that’s why I want to talk about them here.
Take for instance the separation of the private and the public discussed in my blog last week. It is not without reason that Žižek and others don’t simply talk of the separation of both spheres of life as such, but of the separation of public space and private space. In other words, the private and the public are spheres that are characterized not only by distinctive manners but also by the geographical areas where they take place. The private is typically the sphere of life at home, where the walls of your house protect you against the looks of others, while the public is typically the sphere of life in the street, where everybody can see what you do. Therefore it’s not strange to say that the private intrudes on the public or the other way round, for it is a bit like a burglar breaking into your house: boundaries are crossed (in fact, that is what happens when someone talks too loudly on the phone in a train and others feel disturbed).
Usually it takes some time to go from the private to the public space: you have to open the front door of your house and maybe walk through your front garden before you are really in the public space. In the front garden you are still on your private property, but nevertheless you cannot do everything you like there (by law, you are not allowed to go naked there, though you are allowed to do so in your back garden, especially when there is a wall around it).
The separation in space and time is not only characteristic of the spheres of the private and the public. For example, we meet some friends only on the sports field, and it might never occur to us to invite them to a birthday party, even if we feel very close to them as teammates. As soon as we leave the stadium, each of us goes his own way. And it is the same for the people we meet at the workplace: our colleagues are usually not our friends and, even if they are, during working hours at the workplace we do not treat our friends as such (which may be a reason for conflicts between us, however).
In his An Ethnologist in the Metro Marc Augé says much the same: “In order to go from one activity to another one needs time and space.” This is what we use the underground for and “when we change our activities at certain hours, we change also our locations. These changes of activity are not simply technical changes; they can go together with real role changes, for example when they go together with a passage from the life that we call professional to the life that we call private. The contrast private life / professional life as such does not comprise all kinds of changes of activity: there are forms of life that are more or less public that are not professional – it happens that one goes, alone or with friends, to public places in order to relax; that one goes to the stadium; to a parade; to a display of fireworks; to the theater; or to the cinema – and multiple forms of private life, official or secret, with family or alone, juridical or religious ...” (2013, 95-96; translated from the French edition)
Many of our activities based on social relationships are place-bound and time-bound, or at least in modern contemporary society they are, which can make our life rather compartmentalized.

Monday, August 17, 2015

Public and private


Recently I read Event: Philosophy in Transit by Slavoj Žižek (Penguin Books, London etc., 2014). I have some doubts about the book, but I won’t write a review. Here I want to limit myself to discussing a passage that casts an interesting light on modern society. In this passage Žižek points to the changing status of public space: “ ‘[The] street is an intensively private place and seemingly the words public and private make no sense.’ ... [B]eing in a public space does not entail only being together with other unknown people – in moving among them, I am still within my private space, engaged in no interaction with or recognition of them. In order to count as public, the space of my co-existence and interaction with others (or the lack of it) has to be covered by security cameras.” (p. 176; the first sentence is a quotation from the Chinese People’s Daily)
According to Žižek, the public space is becoming smaller while the private space is growing: actions that in the past were performed only at home, or in places where they couldn’t be observed by others, now often take place “in the street” as well, without any feeling of shame that everybody can see them. Indeed, I can remember that when I was a child, kissing in public between lovers “was not done”. Now nobody cares. Today, so Žižek, it even happens sometimes that fully erotic games take place in “heavily public places” like beaches, trains, railway stations, shopping malls, and the like, and most people passing by act as if they don’t see it. In other words, the private intrudes on the public. People check themselves only when surveillance cameras are present, and this is not, I think (Žižek doesn’t explain it), because they can be seen, for in public spaces people can always be seen, but because they can be punished for what they do. Only Big Brother can make people behave themselves, or so it seems.
How about the private space? Does it become larger, because it simply absorbs parts of the public space? This would fit into the modern trend of increasing individualism. Žižek seems to think it does: “It is often said that today, with our total exposure to the media, culture of public confessions and instruments of digital control, private space is disappearing. One should counter this commonplace with the opposite claim: it is the public space proper which is disappearing. The person who displays on the web his naked images or intimate data and obscene dreams is not an exhibitionist: exhibitionists intrude into the public space, while those who post their naked images on the web remain in their private space and are just expanding it to include others” (pp. 178-9; italics Žižek).
Although this is true as such, I doubt whether it is only the private space that expands at the cost of the public space. It’s not a development in one direction only. For why else, for instance, are we advised to cover the webcam of our laptops or PCs? Precisely because otherwise our private actions can become public. Or take the activities of the secret services that try to find out what government leaders do, but also the laws prescribing that data once considered private, like e-mail data, calling behaviour and data on other activities you do via the modern media, are collected and stored. I can see this only as an intrusion of the public into the private, and so do the national committees that have been established by governments (!) for protecting the private. And once people become aware that their private behaviour can be seen by public agencies, albeit secret public agencies, it’s quite possible that they will behave accordingly, restraining themselves in what they say and do online (as people in a dictatorship do).
What we see here, then, is both an extension of the private at the cost of the public and an extension of the public at the cost of the private. The development is not one-sided, as Žižek seems to suggest. Even more, I think that the very distinction between the public and the private is at stake. Rather than one sphere of society intruding on the other, or one (the private) expanding at the cost of the other, maybe the separation of the private and the public will fade away and both will mingle, so that we’ll gradually get one single common sphere with, at most, more public and more private corners. Is that worrying? Given our present way of life it is. Nevertheless, such a mixture of spheres is not new. It’s what you find in small isolated societies and, I guess, what you found in “primitive” prehistoric societies, so in societies where more or less direct relations prevailed. But just that is a reason to be worried, for nowadays we no longer live in such small-scale societies but in mass societies. It is precisely in mass societies, in which direct relations are mainly absent, that keeping the two spheres apart is important for protecting us against the arbitrariness of Big Brother and of our fellow man.

Monday, August 10, 2015

On commemorating

Monument for the victims of the terror attack in Bodø, Norway

The day I arrived in Trondheim, Norway commemorated the terror attacks of July 22, 2011, when 77 people were murdered. Exactly four years earlier I had been travelling somewhere north of Oslo. Since I avoid the news during my holidays, and also because my knowledge of Norwegian is only basic, it took some time before I knew what had happened. It came as a shock. Now I had been travelling around the country again, and one of the things I noticed was the local monuments commemorating the calamity.
Commemorating impressive events with monuments, especially when there have been many victims or when these events have changed history in a significant way, is a normal aspect of life. Maybe some readers know that I take pictures of monuments and sites related to the First World War, which I publish on my main website (see http://www.bijdeweg.nl/WO1-Inleiding.htm). A century after its outbreak, commemorations are still held. All the more so for more recent events like the Second World War, 9/11, or the shooting down of Malaysia Airlines flight MH17 last year in eastern Ukraine. Remembering seems to be a basic act of life, not only for individuals but also for whole societies. Has it always been so?
My answer to this question can only be a guess, so please correct me if you know more about it. Anyway, I have a strong impression that it has something to do with our view of history and the way we place facts and events in life, as individuals and as a society. And so I think that it is a relatively recent phenomenon. I don’t want to say that remembering the victims of a war, a violent event or a tragic incident did not happen long ago, but it was then always private or limited to small circles of people. Through the years I have seen many monuments for the First World War, of course, and for the Second World War as well. You find them everywhere in this part of Europe, and a lot of them outside this region too. I have also seen monuments for other wars and human catastrophes, like the Franco-Prussian War (1870-1871), or for victims of traffic accidents along roads, and so on. What strikes me is that they date from the middle of the 19th century or later. Of course, there are many older monuments, like the Roman triumphal arches, but these do not commemorate victims but victories in war. Or you find religious crosses at crossroads or chapels built on memorable sites (and I think it is the same in non-Christian countries and regions), but they are anonymous in the sense that a casual passer-by does not know what happened there. They serve as reminders only for those who already know what happened; they don’t tell you about it. That monuments invite the passer-by to stop and read what occurred, and that they record who died (indeed, “modern” war monuments are often full of names), is a phenomenon of relatively recent history, I think.
Why just now? As I see it, it has to do with a new idea of history called “historicism”, which developed in the 19th century and stressed the significance of the context in which things happen, and also with the rise of psychology at the same time, which stresses the importance of remembering and coming to terms with traumas for a balanced inner life. It is not that such scholarly ideas explicitly made us build monuments, but they stand for a new view of society and of the way we deal with what has happened to us. Once I thought that many monuments, especially monuments commemorating wars, were only an expression of nationalism, so just the kind of feeling that also caused the wars such monuments were erected for. Later I learned that nationalism is only one aspect of such monuments, and often only a minor one. For monuments, like so many symbols, express above all an inner emotion, and they try to summarize what many people feel and want to share with others.

Monday, July 20, 2015

Keep it simple


Descartes’ Rules for the Direction of the Mind (see my blog dated June 22, 2015) not only gives the basic rules for a methodical approach to scientific problems. It also contains a number of statements with a wider meaning; statements that make sense in people’s daily dealings with each other. Some seem obvious. Nevertheless, we often forget to apply them. For example, in Rule IX Descartes tells us that people are often more impressed by difficult, high-flown, far-fetched reasonings that they don’t completely understand than by simple, transparent arguments. Knowledge, according to Descartes, must not be deduced from what looks important and obscure but from what is easy and common. Isn’t it so that (to give my own example) a politician who uses bombastic language, empty of content and not founded on the facts, tends to have more followers than one who tells the truth in a clear way?
Descartes’ words made me think of what is called Occam’s razor. Occam (or Ockham) himself didn’t use the word “razor” for his principle, and he also formulated it in different words than we do today. He was a Franciscan friar who lived from about 1287 to 1347. The maxim that made him famous was, in his words, “It is vain to do with more what can be done with fewer”. Today this is read as “Entities are not to be multiplied without necessity”. For example, take the reasoning (1) “All men are mortal” - (2) “Philosophers are men” - (3) “Socrates is a philosopher” - (4) “So Socrates is mortal”. This reasoning contains the entity “philosopher”, which is superfluous here, for if we define “philosopher”, we get something like “a man who studies fundamental problems”. Substitute this definition into the syllogism and you’ll see that the entity “philosopher” does no work in this explanation of why Socrates is mortal.
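To make this concrete, here is a minimal first-order rendering of the two routes (the notation is my own, not Occam’s or the quoted texts’): the shorter set of premises already delivers the conclusion, so a separate predicate for “philosopher” adds nothing.

$$\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \mathrm{Man}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})$$

$$\forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)),\quad \forall x\,(\mathrm{Phil}(x) \rightarrow \mathrm{Man}(x)),\quad \mathrm{Phil}(\mathrm{Socrates}) \;\vdash\; \mathrm{Mortal}(\mathrm{Socrates})$$

If Phil(x) is unpacked as “Man(x) who studies fundamental problems”, the second derivation merely takes a detour through the first: the extra entity lengthens the argument without changing what is proved.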
Sometimes Occam’s razor is taken to mean “Say it as simply as possible”. This interpretation is not correct, for the simplest theory is not automatically the best one: reasoning with several entities can be more brain-breaking than a single statement with one or two entities that comprises a lot, and yet it can be the better explanation. Aristotle thought that bodies like stones fall to the ground because their “natural place” is there. However, reality turned out to be more complex, and now we use complicated Newtonian assumptions and formulas to explain gravity or, even better, Einstein’s theory of general relativity, even though Aristotle’s view was simpler.
Occam’s razor has a long history. Actually, Occam was not the first to formulate the principle, but once it had been clearly formulated by him it had a big influence. Many scientists applied it and many philosophers referred to it. Wittgenstein, one of my favourite philosophers, put it this way: “If a sign is not necessary then it is meaningless” (Tractatus logico-philosophicus: 3.328). Or later: “Occam’s Razor ... says that unnecessary elements in a symbolism mean nothing.” (5.47321).
Things that we first thought to be simple can turn out to be quite complicated, but Occam’s razor helps us avoid unnecessary complications. It’s not completely harmless, as we have seen, and one should beware of the pitfall of oversimplification, but nevertheless, as a rule of thumb, you can start with the idea of keeping things as simple as you can and then see where it brings you. It helps prevent you from being deceived by people who want to impress with an air of erudition and scholarship. For as Descartes warned us in Rule XII: learned people are often so ingenious that they find a way to be blind even in matters that are clear in themselves and that every simple mind understands.

Monday, July 13, 2015

Why it is good to make a bad plan


I finished my last blog saying that with his definition of “person” Locke gave a point of departure for future discussions of the concept. We can also call such a lead a “handle”. Famous critics of Locke were Joseph Butler (1736) and Thomas Reid (1785), but the discussion still goes on today. It shows how important a good handle is for starting a discussion and making progress, for what should we talk about if we have nothing to talk about? We would first have to invent a theme and then give it content, too. For instance, we can decide to talk about “man”, as the ancient Greek philosophers did. But then? We only have something to discuss if we fill in the idea of man, that is, if we define it. That’s what Plato did when he described man as a featherless biped. Now Diogenes of Sinope had a handle to criticize Plato’s definition, which he did by bringing Plato a plucked chicken: Plato’s “man”. As a result Plato changed his definition to “Man is an upright, featherless biped with broad, flat nails”. And so the discussion on man began.
Although this is a funny anecdote, it shows in a nutshell what science is: making theories, testing theories experimentally, improving theories. Although many people think that science starts with the second step, so with experimental research or at least with observing, this is not true. The idea is the first of these three steps in science, for without ideas there is nothing to start with and nothing to investigate. People who think that they just look and then start to develop ideas deceive themselves. Their ideas are simply implicit and not explicitly worded.
In a scheme it goes this way:
P1 > T1 > E > T2 > P2
P1 is a question or theme we want to discuss, or something like that, also called the problem. For example: “What is man?”. Then we form an idea of how things might be arranged, a kind of theory, like “Man is a featherless biped” (T1). Is it true? We can try to find out by discussing it and doing tests and experiments (E). If we are successful, we can formulate a better theory (T2). But often we are not fully satisfied with our solution or improved theory. Then we get new questions, new themes to talk about, etc. (P2). And so our knowledge evolves.
A scheme for the evolution of knowledge has also been developed by Karl R. Popper:
P1 > TT > EE > P2
Again, P1 is the problem we start with. TT means “tentative theory”, our guess about how the things we are interested in might be arranged. EE refers to the tests and investigations of our tentative ideas; Popper actually calls this phase “error elimination”. When we have finished the EE phase, however, we are never completely content with our result, so we get a new problem situation, which Popper calls P2.
Is Popper’s scheme for the evolution of knowledge right? In a certain sense it is, if we suppose that my T2 is the conclusion of Popper’s EE and is included in it: from a philosophical or scientific point of view a solution to a problem is never completely satisfactory, for we can always ask new questions. Therefore no solution is free of problems. Nevertheless, I think it is better to formulate T2 explicitly, as in my scheme, for in practice we often stop once we have formulated T2, even if we are not completely satisfied. There is nothing against doing so, for how could it be otherwise in many cases? In practice we need to act! We cannot go on evaluating, discussing and thinking about the best solution forever, as if we lived in an ivory tower. We simply have to do something.
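Purely as an illustration, the two schemes can be put side by side in a few lines of code; the function and parameter names below are my own, not Popper’s, and this is a sketch of the idea rather than anything more.

```python
def evolve_knowledge(problem, propose, test, max_rounds=5):
    """Sketch of the cycle P1 > T1 > E > T2 > P2.

    propose(problem) returns a tentative theory (T1);
    test(theory) returns (improved_theory, new_problem_or_None) after
    criticism and testing (Popper's "error elimination", EE).
    """
    history = []
    for _ in range(max_rounds):
        t1 = propose(problem)          # T1: the first, possibly bad, plan
        t2, new_problem = test(t1)     # E / EE: testing yields an improved T2
        history.append((problem, t1, t2))
        if new_problem is None:        # in practice we often stop at T2,
            break                      # even when not fully satisfied
        problem = new_problem          # otherwise P2 becomes the next P1
    return history
```

The only difference from Popper’s P1 > TT > EE > P2 is that T2 is recorded explicitly and the loop may stop there, instead of always feeding a new problem back in.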
This makes me think of something I learned as a child. I liked playing chess, and in order to improve my level I studied chess books. From the great chess player and theoretician Aron Nimzowitsch I learned that a bad plan is better than no plan. I never forgot it and I still apply it. For if we have no plan, we don’t know where to start and we keep erring, but even a bad plan gives us a point of departure. Even if our first steps lead to nothing, we have a frame for evaluating our mistakes and improving our design. How this works is shown by a scheme for the evolution of knowledge, like mine or like Popper’s.

Source: Karl R. Popper, Objective Knowledge. Oxford: Clarendon Press, 1979; p. 164.

Monday, July 06, 2015

Making up for an omission


John Locke made the idea of consciousness the heart of his theory of man. He was the first to develop a thorough theory of consciousness. That’s why I called him the father of consciousness theories in my last blog, although he didn’t invent the concept. Many theories of consciousness have followed since then. Some of them, which often refer explicitly to Locke, discuss the question of what a person is, since Locke was also the first philosopher to define the concept of a person. I, too, have written about this subject, in blogs and in articles. What I never did, however, was quote Locke’s definition of “person”. I don’t know why not. Maybe it was because in my writings I referred mainly to the present discussion of the theme and mentioned Locke only by way of background information. However, in view of my present blogs I think it is a good idea to make up for my omission here, just because Locke’s definition shows so well how important the idea of consciousness is in his approach. So here it goes. A person is, according to Locke,
a thinking intelligent being, that has reason and reflection, and can consider itself as itself, the same thinking thing, in different times and places; which it does only by that consciousness which is inseparable from thinking, and, as it seems to me, essential to it: it being impossible for any one to perceive without perceiving that he does perceive. When we see, hear, smell, taste, feel, meditate, or will anything, we know that we do so. ... since consciousness always accompanies thinking, and it is that which makes every one to be what he calls self, and thereby distinguishes himself from all other thinking things, in this alone consists personal identity, i.e. the sameness of a rational being: and as far as this consciousness can be extended backwards to any past action or thought, so far reaches the identity of that person; it is the same self now it was then; and it is by the same self with this present one that now reflects on it, that that action was done.” (from ch. XXVII “Of Identity and Diversity” in John Locke An Essay concerning Human Understanding: http://www.uvm.edu/~lderosse/courses/intro/locke_essay.pdf)
I have quoted a bit more than just the definition of “person” in order to show how important “consciousness” is for Locke. Since it is an inner perception, as we saw in my last blog, consciousness in Locke’s sense is especially self-consciousness.
Here I shall not examine how progressive the centrality of the idea of consciousness in Locke’s philosophy was in his days. I think that it led to many steps forward in philosophy and science. But viewed from the present, it also meant that some genuinely important aspects of what a person is were considered irrelevant. By making the mind the core of the idea of a person, the importance of the body is denied. Elsewhere (also in my blogs) I have shown why this is not correct. Moreover, by stressing that the span of a person’s identity is related to what this person is aware of, back from the present to the past, the importance of unconscious processes for what makes up a person is ignored. But what happens unconsciously within a person also partly makes up his or her personality. We even often consciously push some of our possible reactions into the unconscious inner space, where they are then present as if in a storage room: we call such an activity learning or training. And isn’t it so that we often hold a person responsible for what s/he did unconsciously or, in marginal cases, for what s/he did in an automatic reaction or in an inattentive way? One can be held responsible for a deed precisely because one let an unconscious, so “automatic”, reaction run its course.
Be that as it may, with his definition of “person” Locke set off a discussion that has lasted for centuries and still hasn’t ended. That’s the merit of his definition: without a point of departure, there is nothing to discuss and nothing to investigate. Locke gives us such a lead in an intelligent way that still inspires a lot of people to think.

Monday, June 29, 2015

Locke's tremendous idea


According to the Encyclopaedia Britannica, John Locke defined consciousness as “the perception of what passes in a man’s own mind.” I suppose it is true that Locke said so, although I cannot check it, for no reference is added to the quotation, which one would actually expect in a work of that standing. Anyway, the passage is not from the famous chapter XXVII, “Of Identity and Diversity”, in Locke’s An Essay concerning Human Understanding (first published in 1689, though this chapter was added in 1694). Here Locke develops the idea of personal identity and links it to the idea of consciousness. For instance, in §19 Locke says that “personal Identity consists, not in the Identity of Substance, but ... in the Identity of consciousness ...”. The idea of consciousness was not an invention of Locke’s. Plato and Aristotle had already formulated theories of consciousness, and the English word “consciousness” existed already more than a century before Locke wrote his Essay. However, just as we can call Descartes the father of epistemology because he first systematized scientific methodology (see my blog last week), we can call Locke the father of consciousness theories because he first gave the concept a full place in philosophy and science.
As my quotation from the chapter on identity and diversity in the Essay illustrates, for Locke consciousness and substance, so mind and body as we would say now, were two different things. In this respect Locke’s approach to consciousness was Cartesian. So for Locke it was basically possible that “the soul of a prince, carrying with it the consciousness of the prince’s past life, enter and inform the body of a cobbler” (§15 in chapter XXVII of the Essay), for the bodily characteristics of the prince were not part of his personality. We still find this separation between mind (or consciousness) and body in the modern discussion on personal identity, from Bernard Williams in “The self and the future” (Philosophical Review 79/2: 161-180) to Derek Parfit in Reasons and Persons (1984) and beyond, and the so-called psychological-continuity theories of personal identity still form the mainstream view on personal identity, despite alternative views of, for instance, Eric Olson (The Human Animal, 1997) and myself (see http://www.bijdeweg.nl/PersonalIdentity.htm). Only now is it becoming more and more accepted that substance and consciousness in man, so mind and body, are fully integrated. For some this means that man is nothing but a body, or a kind of biological machine, or however they put it; in any case, that man is a completely material being and that the mind is a kind of epiphenomenal effect emerging from human matter. Others, like me, prefer a dual-aspect view of man, which says that man can be considered in different ways, as a biological body or as a conscious and thinking mind, although in the end man is both together. I think that this view also makes it easier to understand how in a certain sense man can survive his or her material death. With this remark I do not mean that man can survive in any religious sense, for example as a soul, but that the idea of mind as one of the two aspects of man makes it possible to understand how culture can survive the bearers of a certain culture; how ideas can continue to exist and have influence long after the thinker who wrote them down in books or on the Internet has passed away. But maybe this is not as anti-Lockean as it seems at first sight, for didn’t Locke say in the §15 just quoted that “The body, as well as the soul, goes to the making of a man” and that the cobbler who received the soul of a prince still “would be the same cobbler to every one besides himself”?

Monday, June 22, 2015

Descartes' tremendous idea


Science is a modern idea. In my last blog I wrote that Montaigne was an essayist and a writer. He was also a keen observer. By writing down his observations, Montaigne broadened our view of ourselves and our environment and deepened our self-insight. But Montaigne was not a scientist; he was not an investigator. In his time the idea of science was still developing, and with his view that everything can be doubted Montaigne contributed to its development. His adage was “What do I know?”, which would later find expression in the doubt that Descartes used for laying the foundations of the ideas of knowledge and consciousness with his famous words “I think, therefore I am”. The idea of consciousness was fully developed by John Locke, but we can see René Descartes as the father of epistemology.
Descartes blamed many researchers of his time for not working systematically. He reproached them for having no line in the way they worked. But then, according to Descartes, it is impossible to get at the truth. What we need is a method: certain and easy rules that lead us to true knowledge. Moreover, Descartes was not satisfied with the old syllogistic logic of Aristotle and the medieval scholastic logic. It’s true that they help systematize existing knowledge and are useful for explaining arguments to other people, but they are not useful for gaining new knowledge. For that we need something else: a research methodology. Therefore Descartes wrote his Rules for the Direction of the Mind. However, this work, written in 1628 or just thereafter, was not published until 1684, after his death. And the first publication was not in the original Latin but in a Dutch translation; the first Latin edition came out in 1701. This work and his other ideas on methodology made Descartes the founder of epistemology.
These Rules, and Descartes’ approach to science generally, gave us not only a new way of investigating nature, including man, but also a new view of knowledge. Or rather, it not only led to a new view of knowledge but changed the whole idea of knowledge, because we got a new way of experiencing what is around us. Before Descartes, from Aristotle to the Middle Ages, those experiences were considered knowledge that could be fitted coherently into what we already knew. New experiences had to be fitted into frames accepted by tradition. But from Descartes on, only those experiences were considered knowledge that could be justified by the right method. Knowledge became what stands the tests of science. Three centuries later Karl R. Popper would sharpen the question of what knowledge is: what we think we know must always be formulated in such a way that we can test it. Montaigne and Descartes introduced the relation between doubt and knowledge. Popper made doubt a part of knowledge.
Descartes did not go that far. He still believed that absolutely certain knowledge is possible; it was only a matter of time before we would get it. But what he did do was found knowledge no longer on experiences as such, so on what we think we see and hear, but on method, so on the way we see and think. This alone was a tremendous idea. It was a new idea, an idea that would lead to a new world: the world we live in today.

This blog is based on an unpublished manuscript by me, titled Science as Method (1988).

Monday, June 15, 2015

What everybody knows


In his essay “Of virtue” (Essays II-29) Montaigne writes about the case of a Turkish lord who tried in vain to shoot a hare. His dogs did not succeed in catching the animal either. The lord therefore concluded that the hare had been protected by its fate. This made Montaigne remark: “This story may serve ... to let us see how flexible our reason is to all sorts of images.”
A few years ago I wrote a blog about Festinger’s theory of cognitive dissonance, which says that when there is a gap between what we believe and what actually is the case, we try to adapt the facts to our beliefs (see my blog dated Dec. 31, 2012). In Montaigne’s example the Turkish lord was so convinced of his own qualities and those of his dogs that he couldn’t imagine that he had failed. Something else must have been the case, so that he could maintain his belief in himself and his dogs: there was a higher power that protected the hare. The much simpler explanation that he wasn’t a good hunter couldn’t be true in his eyes. It’s a clear instance of the reduction of cognitive dissonance in the sense of Festinger’s theory.
So far, so good. However, I wrote, as is generally accepted, that it was Festinger and his team who first formulated the theory of cognitive dissonance, but now we see that Montaigne had already expressed the same idea four centuries earlier. Must we now say that Festinger and his co-workers didn’t invent this theory but that it was Montaigne who did, even though he didn’t call it that? I think there are arguments for saying so, but that we had better stick to the opinion that Festinger & Co. are the inventors.
When I studied sociology long ago, many people said to me: a sociologist investigates what everybody already knows. It is a common opinion, but it is easy to show that it’s nonsense. Nonetheless, there is some truth in it. Often sociologists do investigate what “everybody” already knows, but it is not so that everybody knows that “everybody” knows it (see note). Or some facts are known only to certain groups, while the policy makers don’t know them or, if they do, don’t believe them. Then it’s useful that social scientists investigate the matter. Do teachers really work such long hours as they say? Well, let’s investigate it and compare it with the workload of other employees. Or, what is often heard: “All foreigners are criminals – with the exception of my neighbour” (forgetting that once you have passed the border of your country you yourself are also a foreigner). So let’s investigate it and show that this prejudice simply isn’t true. By the way, it can happen that prejudices are true, for, as Hans-Georg Gadamer explained, a prejudice is actually nothing but an opinion that is not well established by the facts; it can exist because we don’t know the facts or don’t have them at hand. It’s true, in practice prejudices are often unreasonable, biased opinions, dislikes and so on, but then it’s just the challenge for investigators to demonstrate that, or to topple their own prejudices.
Be that as it may, Montaigne was not a systematic investigator. Even more, in his days systematic research in the modern sense did not yet exist; the idea was still under construction, so to speak, and he in fact contributed to it, for example with his view on “doubt”. Montaigne was an essayist and writer. He was a keen observer who wrote down what he saw and thought. Investigating opinions, views, ideas etc., called “hypotheses” in scientific jargon, in a systematic and methodological way and testing their truth is what investigators do and what Montaigne did not do in his Essays. Therefore maybe we can say that Montaigne was the inventor of the idea of cognitive dissonance, if he was, but not the inventor of the theory. It was Festinger and his team who were the latter. Generally there are many good and useful ideas in society, but often it’s uncertain how much truth there is in them, even if they appear to be useful. That’s what we need science for. But perhaps the present blog is only a case of cognitive dissonance reduction that I wrote in order to confirm my own prejudice.

Note: If I remember well, Anthony Giddens once discussed this point, but for this blog I’ll not try to find out where he did.

Monday, June 08, 2015

Art as a daily practice


In his “Afterword” to Michel de Certeau’s Culture in the Plural, Tom Conley writes: “[For de Certeau] ‘culture’ needs to be understood not as a monument celebrating human mastery of nature but, to the contrary, and more modestly, as collective ways or manners of thinking and doing. ... [Culture] is marked by heterogeneity of practices, styles, modes or fashions of selectively and affectively producing (but not arrogating) habitable space.” (Conley, p. 151). In other words, according to de Certeau culture is not something highbrow, as it is often seen, but the way we do what we do, and it can even refer to the most banal actions and kinds of behaviour. In this view, culture consists of modes of doing characteristic of certain groups or even whole societies.
When I read de Certeau’s Culture in the Plural (and other books by him) and Conley’s “Afterword” for the first time several years ago, this view was not new to me. I had subscribed to it long before I had ever heard of Michel de Certeau, let alone read his articles and books. I had borrowed the idea from authors in the field of cultural anthropology. But are both views, the “highbrow view” and the view of culture as the mode of daily practice, really so different today? Take the picture at the top of this blog. I took it at the yearly art market in my town a week ago. What you see there is my stall with some of my photos and books, and in the background a supermarket. Before or after doing their shopping, many people took a walk along the stalls of the art market. Some bought a piece of art; most didn’t. Is there a better example of the growing contemporary integration of culture as the mode of daily practice with highbrow culture, which is often supposed to be at a distance from the hectic pace of daily routine? Art is no longer something we need to watch in the serene atmosphere of a separate temple-like building, be it a theatre or a museum, and take in full of awe. Art is no longer something performed by demigods and explained by expert interpreters. No, art has become something for everybody and by everybody. You can enjoy it everywhere and make it everywhere, as a part of your normal activities; also when you are in a supermarket or before and after shopping. It has become a part of daily practice and it is consumed as easily as a cup of tea or a bag of chips. Isn’t that what we aimed for when we talked about the democratization of culture? Oh, and don’t forget the milk or the mayonnaise.
Source: Michel de Certeau, Culture in the Plural. Minneapolis/London: University of Minnesota Press, 1997. Tom Conley, “Afterword: A Creative Swarm”, in id., pp. 149-175.

Monday, June 01, 2015

A bird in a cage


Last week I stated that man is a prisoner of his or her own habits and routine. Even if the door of the prison is open, s/he doesn’t use the opportunity to escape, as any animal would do. Is that true? Maybe man is more rational than animals. Why should s/he escape when the door is open? Once you are free, you have to decide for yourself; not only now and then but always. You can do anything you like, indeed. However, if everything is possible, in the end nothing is possible, for how to choose? Moreover, once you take a step, it limits the number of next steps you can take. When, for instance, on your walk through life you reach the bank of a river, your choice of where to go seems almost without limit, but once you choose to jump into the river, your choices are reduced to four: going back, swimming to the other bank, giving in and following the stream, or becoming recalcitrant and going against the current. And do you know where it will bring you, whichever decision you take? Most people are not adventurous and don’t have enough insight to be able to make the right choices in all unexpected circumstances, or at least in most, so that it is wiser to stay where you are: in your cage. And because you know that the door is open, you keep the freedom to leave when you get an idea of what to do outside, with the possibility of going back when you like. Seen that way, it is not unreasonable to stay where you are and in practice limit your space of freedom to your cage.
Or is this freedom an illusion? For whether the door of the prison is open or closed makes no difference at all for most people! Even if it is open, they don’t see that it is open. They see no cage. They simply think that they are free and can go where they like. Why this is so has been made clear by the feminist philosopher Marilyn Frye. Although her metaphor was developed to explain the idea of oppression, I think it can also be used to make clear why many people have the illusion that they are free. Let me first give a long quote from Frye’s article “Oppression”, where she puts forward her picture of the birdcage:

Consider a birdcage. If you look very closely at just one wire in the cage, you cannot see the other wires. If your conception of what is before you is determined by this myopic focus, you could look at that one wire, up and down the length of it, and be unable to see why a bird would not just fly around the wire any time it wanted to go somewhere. Furthermore, even if, one day at a time, you myopically inspected each wire, you still could not see why a bird would have trouble going past the wires to get anywhere. There is no physical property of any one wire, nothing that the closest scrutiny could discover, that will reveal how a bird could be inhibited or harmed by it except in the most accidental way. It is only when you step back, stop looking at the wires one by one, microscopically, and take a macroscopic view of the whole cage, that you can see why the bird does not go anywhere; and then you will see it in a moment. It will require no great subtlety of mental powers. It is perfectly obvious that the bird is surrounded by a network of systematically related barriers, no one of which would be the least hindrance to its flight, but which, by their relations to each other, are as confining as the solid walls of a dungeon.

This picture, used by Frye to grasp why it is so difficult to see why and when oppression exists, can also be used to grasp why many people think that they are free, even when they actually live in a cage. Most people just stand too near to the wires and see only the wire that is right in front of their eyes. This gives them the idea that they are free: isn’t it easy to get out by simply walking around the bar? However, if they took a few steps back, they would see that they are caged in ... and maybe they would also see that there is a door that is open.

Quotation from Marilyn Frye, “Oppression” on
http://people.terry.uga.edu/dawndba/4500Oppression.html

Monday, May 25, 2015

No way out

An animal runs away when the door is open, but man doesn't want to escape from his self-made cage

Somewhere in his Essays Montaigne writes about marriage: “It happens, as with cages, the birds without despair to get in, and those within despair of getting out.” (Essays III, 5) It’s true, Montaigne doesn’t write that all marriages are such that one wants to escape once one is in. Nevertheless, he thinks that it is so most of the time.
Does this quotation apply only to marriage? I think that its meaning is wider and that it is applicable to most human institutions and habits, whatever they are. It’s true, many people feel happy in their self-built cages, but how often doesn’t it happen that once a certain stream of life, a certain habit, an institution, or whatever we are doing or whatever situation we are in, alone or with others, becomes a routine, we grow dissatisfied and are no longer pleased with it? Maybe this feeling is not present on the surface and not all the time, but in our hearts we feel that something has to change, and deep down there is a hidden discontent. But does man use the freedom to go out once s/he gets it? Look at an animal in a cage and see what it does when you open the door. After some hesitation it goes outside, and once there it runs or flies away. Maybe it comes back in the evening for food and shelter, but after a few days it is accustomed to its freedom and you’ll never see it again. However, if the animal is a man, as a rule s/he stays where s/he is: in the cage. For human beings stick to their habits, even if there is a way out.

Monday, May 18, 2015

Caught in your mind


Some people are caught in their minds. They have no flexibility in the way they think. As things have been done in the past, so they must be done in the future. Or once they have developed ideas about how things should be arranged in the world, about what is good and what is wrong, they stick to them and are not open to the fact that many people in the world think otherwise, about details or about the main lines or about both. “I am right or my group is right and the others are wrong, a little bit or completely.” They cannot ignore those who have different opinions and probably they cannot change them, but “my way is better”, or at least that is what they think. Or “our way is better”, for hardly anyone stands alone in his or her views. Most people leave it at that and manage to live with the others who are not like them. And “we”, the flexible ones, or so we see ourselves, succeed in living with them, and we also leave it as it is, most of the time. Why not? If the baker is prepared to sell me his bread, thinking that he sells the best bread in the world and that other recipes are inferior to his, that is okay, as long as I am satisfied with what he produces. And maybe the brown bread bakers fight with the black bread bakers about the best colour of bread, but most people don’t care about the colour, or it is merely a theoretical discussion. Although ... I remember that in the 1950s in the Netherlands, when I was still a child, the religion of bakers was really important, even when they produced the same quality of bread, brown or black. Protestants preferably bought bread from Protestant bakers and Roman Catholics preferred Roman Catholic bakers, even if it took more effort to go to a baker of the right religion. And you did not only do so when you wanted to buy bread: the whole of Dutch society was organized according to the principle that people went around with people of the same religious and political views. It was called “pillarization”, and the main pillars were the Protestants, the Roman Catholics, the socialists and the liberals. This last group consisted of those who could not or did not want to be classified in one of the other groups. But people lived peacefully together, and the leaders of the pillars solved the problems that might arise in the backrooms of parliament and other relevant institutions.
The situation becomes problematic, however, when a group becomes zealous and wants to spread its ideas in an active way that is more than simply making propaganda. It becomes yet more serious when such a group starts to do so by violent means. Then it is only one step to terrorism, if not civil war or outright war. If the group succeeds, which happens too often, we have dictatorship, often cloaked in an ideology and covered with a name that pretends to show enlightenment. In order to guarantee that the ideas remain pure, the victors fence themselves off, to prevent evil ideas (and persons) from coming in and those who don’t want to conform from going out, for who is so stupid as to want to leave paradise?
I had to think of all this when I was recently in the former German Democratic Republic (GDR) and visited the Border Museum at Sorge, near Wernigerode. There I saw fences with barbed wire, a watchtower, guard posts, etc., left as warnings for the future of what happens when thought comes to a standstill and people literally fence themselves off, in order to prevent established ideas from changing and to freeze them at the moment they are considered best. And in order to make those who are so lucky as to live on the inner side of the fence, and who are not yet convinced of the superior ideas at the moment the gate is closed, accept the ideas that are supposed to bring them heaven on earth, like the communism that was the reigning ideology when the fences near Sorge were built. But as history has shown and will show again and again in the future, we may be able to shut up a person or a group, but we cannot shut up a people and we cannot confine ideas. In the GDR, people rose in revolt, the Berlin Wall fell and with it the Iron Curtain that closed off the eastern part of Europe from the western part. Only here and there parts of the curtain remain, as a warning and as a way to tell us that the mind cannot be caught and will never lose its freedom to think, even if individual minds and, when these are put together, group minds can cage themselves and others with them.

Friday, May 01, 2015

The meaning of the ordinary


At the end of my last blog I wrote that selfies are seldom taken when you feel bad. Usually photos are taken of themes with a positive meaning; themes that are more than simply neutral, let alone negative. Selfies, and by and large photos taken of yourself (and of other people than you), don’t say “That’s me ...” but “That’s me!” This is just an instance of a common characteristic of much photography. As Pierre Bourdieu analysed so well in his famous book Un art moyen (a “middle-brow art”), “You don’t photograph what you have before you every day” (p. 57). Or rather, that’s what many people think. Of course, what is “normal”, and so what is not photographed, depends on your point of view. What is everyday and ordinary for me may be a thing of beauty or an object of interest for a tourist! The old door of my barn that is almost falling off its hinges and urgently needs to be repaired may be very attractive to a passer-by. As Bourdieu tells us: “The tourist or the stranger are amazed, when they photograph everyday objects or persons in the setting of their regular activities” (ibid.). Who said that a thing of beauty is a joy forever? It depends on your standpoint.
This makes clear that what is considered mean, average, ordinary, common, or whatever you want to call it, is not as mean, average, ordinary or common as is often thought. Precisely its ordinariness is what makes a thing meaningful, most of the time anyway. It says that the object or activity concerned is a routine part of its setting: it is so well integrated into its surroundings or flow that it is no longer conspicuous. You need to be an outsider in order to see it, or the object or activity needs to be taken away or stopped before you realize its significance. Holidays change your feeling for what is photographable, to put what Bourdieu says in other words. This is also the case in another sense. Poverty is seen and felt by the poor, and they feel ashamed to see it in a picture, and who wouldn’t? But tourists take such pictures because they find it so picturesque ...
Lately someone told me that I take pictures from such special positions, meaning from such unusual, banal or ordinary viewpoints. I take it as the compliment it was meant to be. My view of the world is not innate but something I learned during my education as a sociologist and philosopher, so it is something everybody can learn. Be that as it may, what is important is that we learn to look and that we realize that not only the exceptional is valuable, but the mean, average, ordinary, common etc. as well. For isn’t it so that the exceptional can only exist because there is something we find mean, average, ordinary, common etc.? That the exceptional is shaped by the normal? Even more, if the mean, average, ordinary, common etc. didn’t exist, we couldn’t live, for just these, so the routine, give what we exceptionally do and what we positively value as an exception (but also what we negatively see as exceptionable and reprehensible) its foundation. Maybe the mean, average, ordinary, common etc. is the most meaningful part of what we do. In the end we need to park our car somewhere if we want to visit a restaurant.
Reference: Pierre Bourdieu (ed.), Un art moyen. Essai sur les usages sociaux de la photographie. Paris: Les Éditions de Minuit, 1975.

Monday, April 27, 2015

Your selfie and your soul

The image is the reflection of the soul

In his Philosophical Investigations Wittgenstein writes: “The human body is the best picture of the human soul.” (Part II, iv) In that context Wittgenstein gives the word “soul” a religious meaning, discussing the view that “[r]eligion teaches that the soul can exist when the body has disintegrated.” (ibid.) However, I think that we can also give “soul” a wider meaning; for example, we can read it as “mind” or as “inner life”. Seen that way, the idea expressed in the first quotation is in agreement with recent discoveries in neuroscience, especially the discovery of mirror neurons: it has become increasingly clear that there is a direct relation between the way I feel and the expression on my face. It’s even so that if I consciously produce a certain expression on my face, say one of sadness or one of joy, I tend to feel that way, as you’ll remember from my older blogs.
One of the consequences of this relation between inner feeling and facial expression is that I can read someone’s frame of mind on his or her face, although it can happen that the other tries to mislead me. For it is possible to suppress the “mechanism” and consciously make the expression on the face not match the inner state. However, as a rule, when I look at the face of another person, I can say something about that person’s inner feelings, about his or her inner life. Actually, some people are better at it than others. Most of the time this, what I could call, “mind reading” is not a conscious activity. Often we don’t even realize that we are reading the mind of the person in front of us. The result can be, for instance, that my feeling (and body!) automatically adapts itself to the feeling of the other. Who doesn’t know the phenomenon that we start to yawn when we see someone yawning, or that we become sad (or just happy) when we see someone crying (or laughing)? It can even extend to a whole group: one person laughs and everybody present starts to laugh, too! Man is a social being to the core, and it is becoming more and more clear that the relation between inner feeling and outer expression is the basis of human sociality, or at least a major part of it.
Now, I think, the sense of taking selfies also becomes clear, and, this is essential, why they are shown to others. For what is more obvious than making a picture of yourself and presenting it to the world in these days, when looks and appearance have become more important than ever and showing yourself has become more possible than ever? If direct face-to-face relations are absent, nowadays there is no better reflection of your self than a selfie, for it gives an image both of your outer self and of your inner self, since the former mirrors the latter. A selfie gives a complete image of your I, or rather of your positive I, for selfies are seldom taken when you feel bad. But as long as you feel good, it is a reflection of your soul.

Monday, April 20, 2015

Self in the era of selfie

Selfie

Today we live in the era of images. Originally, making images was a real craft, left to professional painters. With the arrival of photography (and film, but here I’ll talk only about photography), at first not much changed. Making images was still left to professionals, photographers who mainly worked in studios, and a few exceptional hobbyists. This changed with the production of the Brownie camera by Kodak in 1900 and the introduction of the Leica 35 mm camera 25 years later. Now everybody could become a photographer, and indeed, more and more people picked up a camera.
Nevertheless, photography did not yet become a mass phenomenon. It was still mainly done by professionals and – it’s true – a growing number of amateur photographers. Having a camera was still not widespread. Making photos remained the work of specialists, and many amateurs, too, developed their own films and printed their photos; many of them were organised in clubs. Photography was no longer seen as an art but as a technique, despite famous names – to mention a few of my favourites – like Henri Cartier-Bresson, Walker Evans or Ed van der Elsken. If it was seen as an art, it was considered “un art moyen” (Pierre Bourdieu) – a “middle-brow art”, an “average art” or a “middle-class art”. This remained so until photography got a new boost from the 1960s on, when cameras became more advanced and got automatic functions. The first completely electronic camera, produced by Canon around 1980, especially has to be mentioned here. With the introduction of compact cameras and pocket cameras photography became a mass phenomenon. What also happened is that the status of photography went up again: since the 1990s good photography has again been considered art.
However, all this is nothing compared with what happened with the introduction of the digital camera. The basic technology had already existed since 1975. Initially the quality of digital cameras was poor, but from the 1990s on the technology became so much better that nowadays every camera sold is digital, and film cameras are difficult to get, except second-hand. Digital camera technology has beaten analogue technology, although this doesn’t imply that the artistic expression of digital cameras is always to be preferred.
Digital photography has not only become a mass phenomenon; it has become more than that. Making images is so important now that we can say that present society has become an image society (something already foreseen in 1985 by Vilém Flusser in his Into the Universe of Technical Images). Today we don’t take photos only with special cameras: everything that is digital can become a camera. Mobile telephones especially have this function. And, the other way round, cameras tend to get functions that are not photographic. Sending photos directly from your camera to the Internet is only a first step.
All this has led to new ways of using photos and to new ways to present yourself. More and more photos are uploaded to special websites or to your Facebook pages, to Flickr, and so on. What people photograph has also changed. Hobby photographers who make pictures of landscapes, townscapes and themes they find interesting still exist, but most people make only two types of photos: pictures of their holidays and the places where they have just been, and pictures of themselves. With the latter I don’t mean portraits more or less in the classical sense, but pictures with the meaning “I am doing this”, “I am doing that”, “I am here; look at me”, “I am there with x”. Or “Just me behind my PC”. And such photos, lots of photos, are immediately sent to Facebook, Twitter, Flickr, Instagram and other special photo pages. Some people show so many hardly different photos of themselves there (often taken by themselves, so “selfies”, apparently under the motto “it’s me”) that I wonder – both as a photographer and as a philosopher – what the meaning of all this is. Actually, I know it, of course. In the era of individualism and ego-expressivism, it’s a way of ego-showing. In an era in which appearance has become so important, just because images are everywhere, your image is what you are, so you show it. Media are everywhere, so you can be seen everywhere: use them and you’ll be seen, inside and outside your network. Appearance = to be seen = to make yourself seen. Or: I can be seen, so I will be seen, for what I am as a person is my image. My selfie is who I am. This is the new development; this is the new trend. Even more, it is a step to a new era, if we can believe Vilém Flusser.

On a certain social networking website my profile photo is a picture of my study with its books, and I have many other photos there, too, but not one of myself. On the other hand, I have a rather comprehensive verbal description of my interests and doings. In my view, it gives a good impression of who I am, and I think that it is sufficient for starting a nice conversation. Nevertheless, I often receive comments like: “Why don’t you have a photo of yourself in your profile? Now I don’t know who you are”. As if my verbal self-description didn’t say much about me, and as if all my photos there, especially the photo of my study, didn’t say much about what kind of person I am. Today you need to present a photo of your face in order to show yourself, for only such a photo shows who you are, even if the head in the picture might be empty.

Monday, April 13, 2015

The body and the self (2)

It's me

When I saw someone yesterday and today I think that I see her again over there, but I am not sure of it, I try to remember in detail how the woman I saw yesterday looked, I compare her with the woman I see now, and then I draw my conclusion: she is the same person or she isn’t. However, when I can ask her “Is it possible that it was you whom I saw yesterday at the bus stop?”, I do not expect her to bring up from her mind a physical description of a person at the bus stop yesterday, compare it with her own appearance and then say: “Yes, it was me” or “No, it wasn’t me”; or, alternatively, to compare my physical description of the person I saw yesterday with her own appearance. That would be weird. No, we expect that she says “yes” or “no” on the basis of what she remembers about what she did yesterday. So there seems to be a difference between a person’s identity from the third-person perspective and from the first-person perspective. Nevertheless this doesn’t imply that physical appearance isn’t important for someone’s identity from the first-person perspective, for why else should people wear masks on certain occasions, use make-up or wear beautiful clothes? And, from the third-person perspective, when a person has lost his or her memory, isn’t it clear that this amnesia, even if it’s “only” partial, can have an enormous impact on that person’s personality?
Maybe that’s why so many people find it important to publish photos showing their faces on social networking websites like Facebook, supposing that such photos show who they are.

Monday, April 06, 2015

The body and the self


The case of getting a new body is much discussed in philosophy, but then in the form of a body or brain switch between two people. The first to do so was John Locke (1632-1704), who analyzed the case of a prince getting the body of a cobbler. Since then the discussion has never stopped. It is mainly about the question: what determines the self? Basically there are two views. One is that it’s the body that makes up the self; the other is that the self is mental, be it in the “hard form” of the brain, be it the mind, or be it the memory. Sergio Canavero, who actually wants to perform a body transplant and whose ideas I discussed in my last blog, apparently thinks that the self is mental (laid up in the brain or head). My standpoint is that it is mixed: the self is made up of bodily and mental characteristics. However, most philosophers think that the self is only mental.
Now I want to discuss a case that I’ll quote from memory, since I am too lazy to look up who presented it first. Maybe it was Bernard Williams, maybe it was someone else.
Let’s say that a doctor, who is an adherent of the mental-self theory but who is also a famous body transplant surgeon, tells you that your body will gradually decay and that in the end you’ll feel a lot of pain. You are shocked. But then he says: “I have a solution. I can give you a new body”. You become very happy. You have just started to train for the marathon, and it is your great wish to run it within three hours. If your body weren’t going to decay, it would really be possible, for you have the right physical constitution. So you agree, and the next week you are successfully operated on.
You have a quick and complete recovery and the new body feels like your own. So you start to train for the marathon again. You are an experienced runner, so in your mind you already feel the suppleness of your legs when running. But when you take up your training again, you are stiff. “Okay”, you think, “that’s normal after having been inactive for so long”. Your legs and body gradually improve, but after a year they still do not feel fine, the way you remember from your first body. Therefore you go to a sports doctor. She does a medical examination and the result is: with your present body you can run a marathon within five hours, and four hours will be the absolute limit. You are very disappointed, for you are no longer the long-distance runner you thought you were. The body transplant has been to no purpose. An illusion has been broken.
My story may sound fantastic, but it’s the life story of everybody who grows older. As time goes on, the body starts to decay. It can happen after you have turned thirty or after you have turned forty or whenever, but sooner or later your body loses its youth. Your performance will go down in an absolute sense. For example, if you are a runner or a cyclist, your average speed will go down. But in your mind you still feel young. Many people say so: that they feel young in their minds but that their bodies don’t cooperate any longer. The body has become older but the mind hasn’t (or so it feels). By the time you have become sixty years of age, you have actually undergone a body transplant: your supposedly thirty-year-old mind has (gradually) got a sixty-year-old body. But since the self is in the mind (or at least it is mental and not physical, not even partially), you are still basically the thirty-year-old person you once were (plus, of course, some thirty years of memories of experiences you passed through). Or rather, this is what follows from the idea that the self is mental.
Do you believe it? Being a runner, too, I still feel in my mind the suppleness of my legs from the time I ran my personal bests. Really. But the days of my personal bests are long gone, and I know it. The feeling is real and it is an illusion. I have changed through the years, and everybody will tell me so if asked. And my self has changed with it, even if it tells me otherwise. For young people I am “that old man”, for they judge me by my physical appearance. And in fact my thoughts are to a large extent guided by what I can do with my body, which is then reflected in my mind. In other words, who I am – so my self – is more than what I feel, my mind, and – what I haven’t discussed here – my memories. It also comprises the features of my body (and actually also features of my social life). But who would have thought otherwise, except a philosopher?

Monday, March 30, 2015

Getting a new body


Sergio Canavero, an Italian neuroscientist, asserts that he can transplant a human head onto the body of a donor whose brain has died but whose body is still healthy. He thinks that he will need a preparatory period of about two years and that then he can do the transplantation. Or so he says. Canavero has already found a candidate who wants to give his head for the operation: a Russian man called Valery, who suffers from a serious neuromuscular disease. Will it be possible? I am not thinking here of the technical possibility of the operation. Such a transplantation will certainly not be possible within two years, but sooner or later it can be done, and I am convinced that it will be done. But what will we get then? Canavero and his future patient seem to think that we’ll “simply” get another Valery (or whoever the patient will be) with the same head as before but with another body, and nothing else. But is this what we’ll get?
The basic idea that it works this way goes back to the philosophy of René Descartes (1596-1650). According to him, man consists of two entities, joined together: a body and a mind. This view is called Cartesian dualism or substance dualism. It says that body and mind are two different substances, and that fundamentally they can be separated. What really makes up our personality, in this view, is our mind. If it were true, it would indeed imply that we could put a head (which is supposed to contain the mind) on another body. The only practical question would be whether both parts will grow together well and then form a material unity.
However, is it true? Is it possible to exchange the body for another one (where “body” means the part below the head) just as we can change clothes? Of course, we like one pair of trousers more than another pair, because it fits better or because of its colour, but basically they all fit. I think that getting another body is not that simple. If you get another body, you’ll become another person.
In an article on personal identity I discussed the case of two runners who swapped bodies. In order to make the present case more vivid, let me suppose that our Russian patient Valery gets the body of a woman. Or, making the case even more vivid, let me suppose that Valery, a white man, gets the body of a black woman (or a black man; actually it’s not important for my argument). I think that this makes clear that there is more to a body change than the technical aspects that the head and the body must fit together (the “wires”, like the spinal cord and blood vessels, must be connected) and that the body must not reject the head (or the other way round). There are also wider medical aspects, and there are psychological and social aspects as well. As for the wider medical aspects: every body has, for instance, an endocrine system that has an important effect on how the body works and behaves. It is not so that we have one system for the head and another one for the body; we have one for the whole. There are individual differences between one person and another, and these have a deep influence on what kind of personality we are, for example whether we are a man or a woman, our sexual behaviour and much more. As for the psychological and social aspects: how you look, how you behave and so on have an influence on how you are treated and on the kind of person you are that is hard to overestimate. Ask a black man in the USA what it is to be black in a society where the standards are white, and he (for example Barack Obama) can fill hours with his stories.
Here in my blog I can present my arguments only superficially; I don’t have the space to discuss them in detail. However, even if Valery got the body of someone more like him (so the body of a white man), problems of the kind I have just touched on would still exist, albeit maybe to a lesser degree. What I want to make clear and put forward is this: if someone gets another body, she or he does not simply get another body. It’s not like getting another coat in a different design. Of course, some “technical” problems have to be solved: the “wires” have to be connected, you have to learn to walk again, and so on; but Canavero seems to think that after a year or maybe two you’ll be back on stage as before. Just this “as before” is the crux of the matter, for the change of getting a new body will not be marginal but substantial: a new person will be born. But also: another person has died. Is that what we want? That’s what a body transplant is about, not whether we’ll be able to connect the wires.
Sources: De Volkskrant, March 21, 2015 (“Sir Edmund”); Surgical Neurology International 2013, 2015; my “Can a person break a world record?”, on: http://www.bijdeweg.nl/PersonalIdentity.htm

Monday, March 23, 2015

Happy words


Maybe you won’t remember it, for it’s already five years ago that I wrote it, but once in a blog I told how I ride better with a smile on my face when I am on a bike tour. This is exactly in line with what I recently wrote about the movements of the body and the way you feel, and especially about the relation between the expression on your face and your feelings. Of course, this has a wider application than only the practice of sports. Trainers in interpersonal communication, for instance, make use of the relation between bodily expression and feeling. They advise adapting your physical expression to the situation you are in. Then you not only make a better impression on the others present, but you also feel better adapted to the circumstances and you feel the way you are supposed to behave. But if such a relation exists, especially between facial expression and feeling, then it must also be easy to integrate this phenomenon into your daily life. For, as I see it, you can do this while doing something you have to do anyway: talking. Just choose the right words and you’ll become happy. Not by choosing words with the meaning of happiness but by choosing words that have a happy sound, or rather a sound that you can only utter by smiling. How does it work? To quote Darwall: “Subjects who are asked to pronounce phonemes involving muscle activity implicated in characteristic emotional facial expressions tend, when they comply, to feel those very feelings.” For instance, the sound o is made with a different expression of your facial musculature than the sound e, and therefore they give you different feelings when you pronounce them. Is it mere chance that saying words like “sorrow” and “gloomy” arouses corresponding feelings within you? Apparently it is not only the meanings of the words that do this but also the muscles in your face. But, surely, it can also work in the opposite direction. Saying an e is done by producing a smile, and smiling makes you happy. So, say “cheerful” and you’ll feel cheerful.
What does this mean for us? There are many ways to try to become happy. One of them is the way we talk: simply use “happy words”, that is, words that you have to pronounce by producing a smile on your face. Say “pleased” and not “glad”; “grief” and not “sorrow”; or – something else – “street” and not “road”. If you do, you’ll feel much better, simply by the way you speak. Or, as Darwall puts it: “There is more to saying ‘cheese’ than we might have imagined.”

Source: Stephen Darwall, “Empathy, Sympathy, Care”, in Philosophical Studies, vol. 98 (1998): 261-282 (quotations on p. 265). (http://deepblue.lib.umich.edu/bitstream/handle/2027.42/43412/1109?sequence=1 )

Monday, March 16, 2015

Empathy and sympathy


Two weeks ago I published the photo above by way of illustration for my blog. I had taken it especially for the occasion and it was supposed to express the idea of empathy. But does it really? Empathy is a complex notion that has got many different interpretations, as we already saw in my blog last week. Within limits it is a bit arbitrary what meaning we should give it. However, I think that one thing has become clear from my discussion: empathy refers to a kind of reflection of another’s emotion or experience within me. After the discovery of the so-called mirror neurons, this needn’t be something vague; we can give it a physical foundation, as I have done in my blogs as well. Empathy makes me a bit like the other whose feeling I reflect. Empathy can reflect all kinds of feelings, from cheerfulness to sorrow and a lot in between.
In a photo I can express only one kind of empathy; I cannot express empathy in general. Even so, I now think that the blog photo of two weeks ago is not to the point, for it doesn’t show a kind of reflection of the feeling of one person in another person. This doesn’t mean that the photo is a complete failure, for it does express something that is often confused with empathy (also by me). We see a hand on a shoulder in a gloomy picture (it was on purpose that I made the photo rather dark and black-and-white). But such a hand on a shoulder is generally not supposed to mean that the “hand-person” has the same feeling as the “shoulder-person”, but that the former is concerned about the latter and cares for the latter. In other words, the photo expresses sympathy.
Although sympathy and empathy are related, they are different. To explain this, let me quote Stephen Darwall’s definition of sympathy. According to him, sympathy “is a feeling or emotion that (a) responds to some apparent threat or obstacle to an individual’s good or well-being, (b) has that individual himself as object, and (c) involves concern for him, and thus for his well-being, for his sake.” In short, sympathy refers to feelings for another person who is in a difficult situation and needs help or support. Nothing of all this is necessary for empathy. The other doesn’t need to be in trouble or have a difficult time. I can also share the joy another experiences (for having passed an exam successfully, for instance), and I am happy because the other is happy. I can also feel empathy when I am watching a play in a theatre. I just feel, even if I don’t have a personal relation to the other. In the case of sympathy I am concerned for the other, not because I reflect the feeling of the other within me, but for his or her sake. I care for the other even when I don’t have the feeling of the other. For instance, the mother of a person I know has died and I am present at the funeral to express my sympathy, but this doesn’t imply that I am sad. I simply show care for my acquaintance because I know that my presence will be very much appreciated by him. Worry and concern are the words that best express our feelings when we have sympathy for someone. Therefore we can say, in philosophical terms, that when we have sympathy we see the other from a third-person perspective: we know what the other feels but we do not necessarily share this feeling. Empathy, by contrast, supposes a first-person perspective, for only by becoming, in a certain way, the same as the other can we know what the other feels.

Source: Stephen Darwall, “Empathy, Sympathy, Care”, in Philosophical Studies, vol. 98 (1998): 261-282 (quotation on p. 261). (http://deepblue.lib.umich.edu/bitstream/handle/2027.42/43412/1109?sequence=1 )