In his Tractatus Logico-Philosophicus Wittgenstein wrote: “The world is independent of my will.” (6.373) And he explains it by saying: “Even if everything we wished were to happen, this would only be, so to speak, a favour of fate, for there is no logical connexion between will and world, which would guarantee this, and the assumed physical connexion itself we could not again will.” (6.374) But if this were true, what then is the relation between my will and the world? Is my will then outside the world, no part of it? But that would mean that there is a second world, which contains my will (for my will must exist somewhere). And what is this second world then, and what is the relation of my will to it?

Moreover, we can apply Wittgenstein’s reasoning to anything else: the existence of bikes, trees, rocks, and so on (note the wording, for Wittgenstein says: “The world is everything that is the case. The world is the totality of facts, not of things.” 1 and 1.1). But what do we mean then when we ask whether there is a free will? What does it mean then that some say that experiments show that we first start to act and only then develop a will to perform the action concerned (Libet and Wegner, for instance)? Reasoning in Wittgenstein’s way, the will would not be a part of the world, or at least not of the “primary world” he talks of. And, whether we have a free will or whether we haven’t (but I think we have, at least in some sense), what does acting then involve if it doesn’t mean performing something in the world? There is only one world, and will and willing are a part of it, as is everything there is.
Monday, April 29, 2013
Recently I was reminded of an article, or rather a book excerpt, that was one of the first pieces I had to read when I started studying sociology: “The other-directed man” by David Riesman. It had been included in a reader with articles and book extracts, and I read it again. It was just as I had thought: although it was written 60 years ago, it is still very relevant.
Riesman distinguishes three types of persons: the tradition-directed type, the inner-directed type and the other-directed type. The tradition-directed person steers his (or her) life with the help of traditional values, norms and goals, as he learned them in his childhood. These values etc. give him his place in life and society and determine the scope of what he can and cannot do. This type of man is typical of strictly stratified societies where social change is at a minimum, such as medieval society.
When such a traditional society begins to change more rapidly, as happened for instance in Europe at the end of the Middle Ages, a new type of man comes to the fore: the inner-directed type. This type of man, too, learns his values, norms and goals in his childhood from his parents and other influential adults, of course, but the values etc. are no longer those prescribed by society; they are individual and serve as lifelong orientations that guide the major decisions in life. The person’s internalized goals are very generalized (Riesman mentions wealth, fame, goodness and achievement as instances) and one may fail to reach them, but one never doubts their guiding value. Riesman calls inner-directed people “gyroscopically driven – the gyroscope being implanted by adults and serving to stabilize the young even in voyages occupationally, socially, or geographically far from the ancestral home”.
But today, now that society changes exceedingly quickly, another type of person comes to the fore: the other-directed man. Such a quickly changing society requires a more resilient type of person: one who lets himself be oriented by the opinions of the people around him. His conformity to society is no longer an internalized set of values etc. but a “sensitive attention to the expectations of contemporaries”. Goals have become fluctuating and short-term, and the other-directed person is no longer steered by an internal gyroscope but goals are “picked up … by a[n internal] radar.” This radar, too, is acquired in childhood from the parents and influential adults, but now these relevant others “encourage the child to tune in to the people around him at any given time and share his preoccupation with their reactions to him and his to them” (my italics).
Of course, “pure” persons, who belong completely to one type, do not exist, let alone a whole society of people of one type. It’s a matter of degree: a person is always a mixture of types, as Riesman stresses. However, one type tends to gain the upper hand in a certain society or in a certain period.
Riesman’s analyses of types of persons help me understand what is going on in society today. Although in the days that Riesman wrote his sentences the other-directed man was still a new type that was not yet very widespread (Riesman thinks of the USA and parts of Sweden, Australia and New Zealand), now, 60 years later, one gets the impression that it is becoming the general type of man – at least in Western society (but certainly not only there) and among the younger generation. It is not difficult to give examples that underline the present other-directedness of modern man: Twitter, Facebook, YouTube and so on are all expressions of the new type of modern man that is developing and that already exists to a high degree. These new media are used precisely for telling the world what occupies you, sharing it with others, encouraging reactions from others, and participating in the occupations of others by giving your reactions. In this way, your internal radar picks up the expectations other people have of you, so that you can adapt your short-term goals and your behaviour to them. It’s what we do in our status updates or tweets and by sending our “likes” (or by our invitations to send them). Or by publishing our most private photos on the Internet, showing others what we do and hoping that it fits what our relevant others think of us.
Source: David Riesman, “The other-directed man”, in Dennis H. Wrong and Harry L. Gracey, Readings in Introductory Sociology, The Macmillan Company, 1967, pp. 610-616.
Monday, April 22, 2013
The research mentioned in my last blog on the mental representation of dreams is an important step forward in brain research. It is in line with research results that I have discussed before in my blogs. The essence is that it shows that objects in the world around us, but also our virtual images, are represented in our brain in some way. As such this is no surprise, but there is a difference between supposing how things are and seeing a supposition substantiated. We are still far away from really knowing how objects – real or virtual – are represented in the brain, but this type of research helps us understand how the brain is structured and maybe also how we think.
But does this imply that the concepts that refer to such representations – or “forms” in the Platonic sense – are also in the head? Without a doubt concepts have a place in the brain. Many studies have shown that brain damage can seriously impair our conceptualizations or even make us fail to remember concepts that we had before the damage happened. Nevertheless, this doesn’t mean that concepts exist only in the brain. Concepts are constructions of what the objects in the world are like, of personal histories, and of how other people see the objects. The latter makes concepts intrinsically socially determined. Instances that show this abound. Take this example. Once in Germany I was walking in a kind of nature park with a paper with questions in my hand. Somewhere I saw a bird in a cage and the question was: What kind of bird is this? Since the answer needed only to be general, my answer was “It’s an owl”. It turned out to be wrong. The right answer was that it was not an “Eule” but a “Kauz”. This made me realize that the birds that in Latin terminology are called Strigidae and in English are called owls (and in Dutch uilen) are divided in German common parlance into two groups, Eulen and Käuze, a distinction that exists only in German and not in the other languages. The first group refers to Strigidae that have a more or less slender appearance, while the Käuze are stockier and rounder. Moreover, Germans also feel that they are two kinds of birds. For them they are two general forms of birds corresponding to two general concepts, while for Dutchmen, Britons, Americans etc. there is only one general form and one general concept.

What this instance illustrates is that concepts are not simply private ideas but that they are intrinsically shared with other people. This is not mere coincidence but the way concepts are formed. So, even if the forms of objects in the head are private, the concepts that refer to these forms have a social dimension. In this way, they exist not only in a single brain but are the property of all the people who participate in their production and reproduction.
Monday, April 15, 2013
You want to make a chair. What are you going to do? According to Plato we have innate ideas in our heads that show what the objects in the world look like. These ideas are more like blueprints or templates than the abstract conceptions that are nowadays called “ideas”. Therefore they are also called “forms”. What you do when you want to make a chair, then, is call up the form “chair” from your memory and make a wooden (or stone etc.) copy of it, of course with your personal variations or with the variations demanded by your client. But does it really work that way?
Yukiyasu Kamitani and his colleagues at the ATR Computational Neuroscience Laboratory in Kyoto, Japan, asked three volunteers to have a nap in an fMRI brain scanner. While the test subjects were sleeping, the scanner recorded the activity of their visual cortices. When they started to dream, the volunteers were woken and asked to tell what they had been dreaming about. From these dreams the researchers chose some simple objects like house, table, man, and so on. In the second part of the experiment the volunteers were shown pictures of these same simple objects, while the brain scanner again recorded their brain activity. Then the volunteers slept again in the fMRI scanner. When they had woken up, the researchers compared the scans made in this third phase of the experiment with the results of phases one and two, and in about two thirds of the cases they could correctly read what the volunteers had been dreaming about. What the researchers saw was still rather abstract, and when they concluded correctly, for instance, that a volunteer had been dreaming about a man, they couldn’t say whether this man was his neighbour or the Japanese Prime Minister or whoever, but anyway a first step has been taken on the path of dream reading.
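To make a bit more concrete what “comparing the scans” could amount to, here is a minimal sketch of the general decoding idea behind studies of this kind: train a classifier on brain activity recorded while the pictures are viewed, and let it guess the dreamed category from the pre-awakening activity. This is only my own illustration with made-up data and hypothetical names (the categories, n_voxels), not the Kyoto group’s actual analysis pipeline.

```python
# Minimal sketch of the decoding idea (my illustration, not the ATR lab's method):
# fit a classifier on visual-cortex responses to pictures, then apply it to
# activity recorded just before awakening and compare with the dream reports.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
categories = ["house", "table", "man"]   # simple objects taken from the dream reports
n_voxels = 500                           # hypothetical number of voxels per scan

# Phase two: responses while the volunteers looked at pictures (made-up data here).
X_pictures = rng.normal(size=(90, n_voxels))
y_pictures = rng.choice(categories, size=90)

# Phase three: activity just before each awakening, plus the reported dream content.
X_dreams = rng.normal(size=(30, n_voxels))
y_reported = rng.choice(categories, size=30)

# Train on perception data, predict the dreams; the real study reached roughly
# two-thirds accuracy, while random data like this stays near chance (one third).
clf = LogisticRegression(max_iter=1000).fit(X_pictures, y_pictures)
accuracy = np.mean(clf.predict(X_dreams) == y_reported)
print(f"decoding accuracy: {accuracy:.2f}")
```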
What does this mean? Paraphrasing moonwalker Neil Armstrong, we can say that it’s one small step for the researchers but a giant leap for dream research. It will help us understand what dreams really are: just epiphenomena of brain activity or ways of storing our recent experiences? Steven Scholte, a neuroscientist at the University of Amsterdam, thinks that the implications are even wider: “At the moment there is a wide gap in our understanding of how visual perception is related to concepts like ‘chair’ or ‘horse’. This kind of research will redefine what semantic knowledge is and how the external world is enciphered in the brain”. And this has philosophical implications, he goes on, for “unless you believe in ghosts, this representation in the brain is also what the external world is … When talking about a horse what do we exactly mean by ‘horseness’? What do we mean by ‘chairness’? On a fundamental level this is about what the world is and how we experience the world”.
Actually this fits well with what Plato thought, for even if ideas or forms are not innate, as he believed, it seems that Plato rightly supposed that you need to have a horse in your head in order to know that what you see out there really is a horse.

Source: De Volkskrant, April 6, 2013: Science Supplement, p. V5.
Monday, April 08, 2013
Is lying worse than misleading? This question is discussed by Jennifer Saul in an article that I came across on the Internet. I found the question intriguing, maybe because I had never thought about it. That lying should be worse than misleading, as many people think, is puzzling, says Saul, for why would it be so if the result is often the same? Why should we then prefer misleading to lying? For misleading need not be better than lying, as we know from the banking crisis.
The idea behind the difference in preference may be that there are differences in responsibility in the case of lying and in the case of misleading. If you say “My husband is not at home” to the visitor, in a normal situation he will believe you. If you say “I didn’t see him come home”, the visitor will also think that your husband is not at home, but it can be argued that he should have been smart enough to ask whether you may have heard your husband coming home (which you actually did). The idea is that the visitor is himself responsible, at least in part, for not drawing the right conclusion and for thinking that your husband still hadn’t arrived. But actually, in a standard situation there is no reason to think that you would be misled, for why would you? This argument also undermines the idea that lying is a breach of faith and misleading is not, since normally you need not take what a speaker says literally and you can suppose that the answer to your question is complete and to the point and doesn’t contain hidden implications. The latter is not always the case, however, for if you are a witness in court and you declare on oath that you did not see your husband coming home (although you had heard him), you cannot be prosecuted for perjury if the judge concludes that your husband wasn’t at home, for you didn’t say that.
For reasons like these it is not tenable that misleading is generally better than lying. How about the other way round? I think that if we discussed this question we would come to a similar conclusion: lying is not preferable to misleading. On average, lying and misleading are equally good or bad. Their moral goodness or badness simply depends on the situation. So, if the visitor asking whether your husband is at home wants to murder him, throw away your moral objection that lying might be worse than misleading – which generally is not right, as we have just seen – and say simply that he isn’t there, even if it is not true.

Source: Jennifer Saul, “Just go ahead and lie”, http://analysis.oxfordjournals.org/content/72/1/3.full
Monday, April 01, 2013
Say we meet someone for the first time. How do we judge him or her then? We put them in one of the boxes that we have ready for it in our mind: the so-called prejudices or – with a less negative word – preconceptions. Where do these preconceptions come from? We learned them when we grew up, so from our parents, from other people around us and from the way such people are generally judged in the society we live in. Thus we judge people from another country or our neighbours, men or women, white, black or yellow people, and so on. The less we know about the stranger we judge, the more we tend to rely on our boxes for our judgments. Some people see through this mechanism and try to see the real person. Others never get the idea or are never able to see that such judgments are based on preconceptions.
Most people have several characteristics: they are at once French and a woman and black and … So they can be put in different boxes at the same time. Then we get a complicated image of the stranger, but it is still preconceived. What is interesting here is that the less interaction we have had with a stranger before, the more our ratings of that person are based on our self-ratings of the traits judged (see for instance John A. Bargh and Tanya L. Chartrand, “The unbearable automaticity of being”: http://www.yale.edu/acmelab/articles/bargh_chartrand_1999.pdf ). This substantiates the idea that we have boxes in our head in which we put persons we don’t know.

Sometimes the contacts with other persons are flimsy and superficial. We see them once and then never again. But it can also happen that the contact continues and even grows into a relation: the stranger becomes, for instance, our colleague, friend or partner, or it is a shopkeeper we see once or twice a week and with whom we always have a chat. Gradually our knowledge of what was once a stranger deepens and we become more or less acquainted with him or her. Then we tend to put the one-time stranger less and less in our preconceived boxes and see him or her as an individual person. Or so it is for most people. How this develops is mainly an individual process. For some people it goes faster, for others slower. Some people always keep employing the preconceived categories to a certain degree for judging others; for other people the preconceptions fade away completely. Be this as it may, I always say: when I have seen someone three times, I forget what he or she looks like and I see only the person. And that’s also how we hope that the one-time stranger will come to think about us, for, as Montaigne said: “I very much desire that we may be judged every man by himself, and would not be drawn into the consequence of common examples.” (Essays, Book I, Chapter XXXVI, “Of Cato the Younger”)
Monday, March 25, 2013
I read a lot, especially when I am on holiday. Reading is simply part of my holiday, and a holiday without reading is no real holiday for me. Even when I am travelling around and spend a big part of my day moving, sightseeing, visiting interesting sites and museums, and – not to forget – taking photos, there is always time for a book. However, on holiday I read other stuff than I do at home. What do I read then? Usually not philosophy, but if I do, it’s on philosophical subjects that are different from what I normally read. But I read history, for instance; a lot of history. Sometimes I read a novel, and further anything else for which I don’t have time when I am at home or that I simply failed to read there. The books may have a relation to the region or town I visit, but often they haven’t.
I also buy books on holiday. I cannot pass a bookshop without at least taking a glance at what they sell. It’s very interesting to see what people elsewhere read and what makes the place I visit interesting in the eyes of the inhabitants. And, of course, I often don’t leave the shop empty-handed. Not uncommonly I buy something philosophical, something that’s difficult to get in my own town, or something that attracts my attention. In an unfamiliar bookshop you always find interesting books that you can’t buy at home or just failed to see there.
Lately, during a weekend trip in my own country, I bought something in a local bookshop, and, because it was Book Week, I also got a free book written by the Dutch author Kees van Kooten. Back in my holiday home, I opened it and my eye was immediately caught by this text:
“Whoever reads books other than local or regional publications when on holiday not only offends the local culture of the chosen destination but moreover wastes his precious holiday time.”
Actually, I should have brought the book back to the shop, for this free book had no relation at all to the town I visited, nor did the book I had bought. As just said, most books I read on holiday have no relation to the region I visit, and local or regional publications are what I read least of all. But is the quotation true? I think that it shows quite a limited view of why we go on holiday. And I can say that, since I often go to rather unknown regions hardly visited by any tourist, except perhaps a lost Dutchman, just to get an impression of what a country is like outside the well-trodden tourist paths.

You can be on holiday for many reasons, and getting to know another region and going into the local culture is only one of them. Many people go on holiday to relax, lie on the beach or simply be away from work and home in an exotic or at least different environment. If they come back home mentally and physically fit and well, the holiday is a success. Then local culture is simply a decoration that makes such a holiday more effective; it’s not something you really need to know about. Other people go on holiday to visit museums and places of cultural or historical interest. Or to practise a sport under circumstances they cannot have at home, like cycling in the mountains for Dutchmen. I can list many other reasons for taking a holiday, but I think that my point is clear: whether reading something other than local or regional publications is a waste of time depends on the reason why you are there. And often one goes on holiday for a mixture of reasons. For me, one of them is just reading the stuff that I didn’t get round to reading at home. And rest assured, when I am back from a trip to the unknown interior of this or that country, I know a lot about its local or regional culture, characteristics and curiosities, even though I have read a lot that has no relation to it.
Monday, March 18, 2013
Philosophers generally accept that being free is a matter of having alternative choices. However, Harry G. Frankfurt showed that I can be free even when I had no choice, because the alternative chosen turned out to be my only possible choice (see my blogs dated Feb. 23, 2012, and Sep. 3, 2012). Nevertheless, often real choices do exist. Then I am free, anyway. Okay, I have yet to execute my decision, but after having done so, I can say that I have performed a free action. But is having alternatives enough for being free? For as Richard Holton says: “[I]t is not making the choice that is difficult, it is sticking with it”. Can I say that I am free if I can choose from alternatives and have begun executing my choice, but don’t bring the action to an end, although it was under my control to accomplish it?
Let’s say that I make the New Year’s resolution to lose ten kilos in the year to come so that I’ll reach my ideal weight. I begin to eat healthier food and to eat more moderately; I no longer take crisps and the like at parties; and so on. In short, I do everything I need to do in order to lose weight, and at the end of the year I have achieved my aim.
On the same New Year’s Day my friend John calls me and says that he has also decided to lose ten kilos. I tell him that I have taken the same decision and I propose that we support each other, which he accepts.
Ten days later John and I are at a reception, and I see John eating chocolate and crisps, while I don’t. So I ask him: “Have you changed your plan to lose weight?” “No”, John says, “but these Belgian bonbons are delicious and a few crisps don’t matter. I know what I am doing and I’ll certainly reach my aim”. And so it goes on. John keeps eating too much and too fatty, although he is absolutely aware of what he is doing and although he knows perfectly well that he has to behave otherwise. Each time he slips up. Although he first says to himself “Shall I take it, or shall I not?”, most times he cannot resist the temptation, despite my warnings if I am there. John is fully aware that he behaves contrary to his New Year’s resolution, that each time he could decide otherwise and that it is up to him to stop eating too much. Sometimes he really does refuse the sweets and fatty food he likes so much. But after a few months his scales show that he hasn’t lost even one gram, and John decides to give up and to take up the plan again next year.
Now I want to ask: was I free and was John free? Is it enough to say that we are free if we can and do choose from alternatives, although we don’t carry out the decision? Is freedom simply a matter of just deciding, separate from the action that carries out the decision? As we see in my cases: it is one thing to take a decision freely and another thing to carry it out. But can we say that I am free, if I am free to choose from alternatives, although my choice has no practical consequences? Decisions are often taken on psychological grounds, but the same is true when we are faced with the task of carrying them out. It seems that this applies to the case of John. We can say that John decided and acted freely each time he took chocolate or crisps or ate food that was too fatty. Nevertheless, we tend to say that some psychological mechanisms that fit his personality type made him take, again and again, decisions that blocked his New Year’s resolution. But is John so different from me that we can say that these psychological mechanisms made him unfree and a slave of his psychology, while I am free, because I achieved my aim? Doesn’t fulfilling a decision also require certain psychological characteristics, anyhow?
I’ll not give an answer or a solution here. However, what my cases seem to suggest is this:
Being free requires more than simply having a choice between alternatives. Freedom is not only a matter of having alternatives but is also in some way related to the execution of the choice. For calling someone free we need a kind of time perspective, a thing that is clearly missing in the traditional analytic view of it.

Richard Holton, Willing, Wanting, Waiting, Oxford: Clarendon Press, 2009; pp. 177-8.
Monday, March 11, 2013
The expression “armchair philosophy” is proverbial. As I explained in my last blog, it refers to a kind of philosophy that wears an air of not needing a factual basis or, more extremely, to an attitude that confronting ideas or opinions with the facts is an unnecessary effort. In short, it refers to simple homespun philosophy. Nevertheless, much philosophy literally takes place in an armchair, and seen that way it is armchair philosophy. An example of it in due form was the well-known television programme “The Philosophical Quartet” broadcast by the German TV channel ZDF: two philosophers (Peter Sloterdijk and Rüdiger Safranski) discussing philosophical problems with two guests while sitting on two couches, without any other assistance than the ideas and opinions in their brains. (I admit: actually I should call it “couch philosophy”; see for instance http://www.youtube.com/watch?v=rI7iVPzKb_M) Montaigne, too, was an armchair philosopher in this sense. In my last blog I showed a picture of his armchair and desk in the library in the tower of his castle where he wrote his Essays. The difference is that Montaigne often consulted his books or used his personal experiences.
Is this the usual philosophical practice: sitting in an armchair, maybe in your tower, and letting your thoughts wander through a world of abstract and less abstract ideas? Or, if you philosophize with a group, the same process done in several armchairs plus verbal interaction between the thinkers? The wandering of thoughts through the world of ideas is inherent to philosophy, but I discovered that some of the masterpieces of philosophy and brilliant works of the mind were thought up in quite different and sometimes very extreme circumstances.
Maybe the situation in which Descartes came to his idea of Cogito ergo sum – I think, therefore I am – is still close to the kind of armchair philosophy just discussed. Descartes had taken service in the army of Maximilian I of Bavaria. Once, when he was travelling back from the coronation of the emperor to the army, the winter weather forced him to stop somewhere. While he sat there alone in a “stove” (a heated room) because he felt cold and had nothing else to do but think, he got the ideas that would determine western philosophy for four centuries. The story doesn’t tell whether Descartes sat in an armchair in his stove, but at least he was not in his familiar surroundings.
Erasmus, too, wrote some of his works during his travels, not only during his stays in the inns along the roads but also on the back of his horse. That is how Erasmus wrote his famous “In Praise of Folly” on his way back from Italy to England, as he tells in his introductory letter to Thomas More.
Nietzsche, too, laid the foundation of at least some of his works not in his armchair. Because of his health Nietzsche had moved to the Swiss mountains. There he spent a big part of his days making long walks, during which he not only enjoyed the overwhelming nature around him but which he also used for thinking. Nietzsche always had a notebook with him for writing down the thoughts he found valuable. Back home he worked up his notes, resulting in what he called his “wander books”.
So, much outstanding philosophy has not been written in an armchair and therefore isn’t literally armchair philosophy. Wittgenstein, too, loved philosophizing elsewhere, for instance pacing up and down the lecture room in front of his students, saying out loud the thoughts that popped up in his mind (which were noted down and later published by his students). But what beats all, I think, is the way he wrote his Tractatus Logico-Philosophicus: he made the notes for this book between the bullets of the First World War and completed it when he was a prisoner of war in Italy. What would have happened to the Tractatus if Wittgenstein had been killed in action?

Be that as it may, I must admit that I am only a simple armchair philosopher: I wrote all my books and articles in the armchair in the photo above the blog of three weeks ago. Nevertheless, not all my philosophical thoughts developed and still develop there, for sometimes I get my ideas while taking a shower or sitting on the saddle of my bike.
Monday, March 04, 2013
When I criticized “armchair philosophy” in my last two blogs, I meant the kind of philosophizing that looks more like groundless imagination than well-founded reflection. There is nothing against imagination in philosophy, of course, and imagination can be very useful when considering a certain problem or question. What I reject is imagined cases whose assumptions are unrealistic, and this is often what we get. It can be said that philosophy begins where science ends, and that they are in line with each other. My criticism is that too many philosophers forget this, which makes philosophy deficient and unfruitful in the long run. That’s just why I so often refer to research results in these blogs: in order to give my analyses of who we are and what we do a solid foundation.

One point of view sees armchair philosophy as philosophy by “somebody who is a complete know-it-all, usually a douchebag or self-declared intellectual. They always feel the need to seem intellectually superior to others, by continuously arguing about any subject they see in media, conversations, etc. and quoting themselves as experts on the subject.” They stick to their opinions, even when confronted with contrary facts, and they feel a need to comment on everything, even “where careful analysis is needed”. (www.urbandictionary.com/define.php?term=armchair%20philosopher&defid=4816655) If this were the only correct view of armchair philosophy, there would be no place for armchair approaches in philosophy. More relevant here is what Wikipedia says about it, which sees armchair philosophy as “an approach to providing new developments in a field that does not involve the collection of new information but, rather, a careful analysis or synthesis of existent scholarship.” (http://en.wikipedia.org/wiki/Armchair_theorizing) And that’s what academic philosophers often do: trying to bring progress in the field under discussion by means of intuition, intelligent imagination, theoretical insight, and the like. That’s why, as Timothy Williamson says, “[a] striking feature of the traditional armchair method of philosophy is the use of imaginary examples” (http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0004/1300/Aristotle.pdf).

But just this “armchair thinking” has met opposition and led to a new philosophical movement, called “experimental philosophy” or “X-Phi” for short. According to this approach philosophical reasoning must be based on experimental data, philosophical questions can be answered with experimental data, and conceptual analysis can be aided by it. But can experimental data lead to philosophical answers, even to such an extent that we had better burn our philosophical armchairs? (see http://www.youtube.com/watch?v=tt5Kxv8eCTA for this) I don’t think so. Maybe we can answer some philosophical questions by means of the X-Phi approach, but it doesn’t alter the fact that we need armchair analysis in order to raise the questions that we want to answer in an experimental way, to mention one thing. In a certain sense, it can be defended that actually all philosophy is armchair philosophy. Nevertheless, the X-Phi approach has a point, and as so often, the truth is somewhere in the middle, I think. Albert Einstein, one of the greatest geniuses who ever lived, was typically an armchair scientist. But weren’t Einstein’s conclusions, too, to a large degree founded on the analysis of experimental results?
The same must also be expected from armchair philosophers: at least that their analyses and argumentation have a sound factual basis. Otherwise they will result in mere speculation and fantasy. An armchair, too, needs a floor to stand on.
Monday, February 25, 2013
Terlassie passing the finish
Some time ago I criticized in a blog the fact that philosophers using thought experiments often don’t realize that the assumptions of such an experiment can push the answer in a certain direction. In my last blog I criticized the fact that in thought experiments the context is often left out, although it can be highly relevant for what we want to show in the experiment. Maybe there are more mistakes in thought experiments that I failed to notice. Who knows, for I have never made a systematic study of the subject. These were just two flaws that caught my eye.
Even so, I don’t want to say that thought experiments are useless, but only that they bear the seed of misrepresentation within them and that they can be misleading. For there are also a lot of interesting and important philosophical thought experiments. And, to be honest, isn’t it just fun to think up a good one and to tease your mind with it? Isn’t just that one reason why we philosophize? One of the most famous thought experiments has even laid the foundation of modern western philosophy: Descartes’ evil demon (Descartes wondered whether his thoughts weren’t misled by a devil; or, in other words, whether his senses did not give him a complete illusion of the external world. He concluded that his thinking activity, at any rate, could not have been misled). Used with insight, thought experiments can bring us a step forward or make things clear to us. Thus the British philosopher Elizabeth Anscombe used one for showing that in concrete situations it is not possible to delimit a person’s actions: what an action is is a matter of perspective. When I flip the switch, do I turn on the light in the room or do I warn the thief in my house? (the example is Davidson’s) If the thief left no traces and took nothing with him, I’ll never get the idea to use the latter description but only the former. This thought experiment also shows that actions can have side effects.
I think that in situations where assumptions and context play no fundamental role, thought experiments can be appropriate. That’s also the case when they are used for undermining arguments. This is what makes Searle’s Chinese Room thought experiment so strong and to the point (http://en.wikipedia.org/wiki/Chinese_room). In this way, I once used a thought experiment for refuting the idea that the person goes only where the brain goes and that brain and body can be separated, as is implicit or explicit in many theories of personal identity in analytical philosophy, the so-called psychological identity theories (like the one defended by Parfit). Here I’ll present it in a new version:

Two marathon runners, Haile Gebrselassie and Paul Tergat, have switched bodies, so that the brain of Gebrselassie and the body of Tergat belong together and the other way round (let us call them Gebregat and Terlassie respectively). They take part in the same race, but Gebregat leaves the race injured while Terlassie wins. Then, since the person goes where the brain goes according to psychological identity theorists, it is Paul Tergat who has won the race, although it was Haile Gebrselassie’s body that passed the finish line first and although Paul Tergat’s body could not withstand the strain of the race and didn’t even finish. So, if we may believe the psychological identity theorists, it is not the body that runs but the brain. For how else could it be that Paul Tergat won?
Monday, February 18, 2013
Philosophers think out all kinds of theoretical situations in order to discuss and answer their philosophical questions. However, it often happens that such a thought experiment starts from assumptions that already push the answer looked for in a certain direction. Thought experiments in which people switch brains are like that: how can we theoretically discuss brain switches and come to acceptable conclusions if we ignore factors that make such brain swaps impossible in practice? (see my “Can a person break a world record?” on http://home.kpn.nl/wegweeda/PersonalIdentity.htm ). An article by Allen Wood (“Humanity as End in Itself”, in Derek Parfit, On What Matters, Vol. Two, pp. 58-82) drew my attention to another factor that is often left out, although it is usually relevant for the problem at hand: the context. In what follows I am greatly indebted to this article.
Two much discussed thought experiments in philosophy (also by Wood and Parfit) are “Sidetrack” and “Footbridge”:
Sidetrack: A driverless, runaway trolley on a railway is heading for a tunnel, in which it would kill five people. As a bystander, you could save their lives by turning a switch and redirecting the trolley onto another track. However, there is a man walking on that track who would be killed instead of the five.
Footbridge: A driverless, runaway trolley on a railway is heading for a tunnel, in which it would kill five people. You are standing on a footbridge above the track. You are slim and short, but a large man is just crossing the bridge. If you jump onto the track, you will be run over by the trolley, which will kill you and the five people as well. If you push the large man onto the track, he will be killed but the trolley will stop and the five will be saved.
Most people will say that it is permissible that you turn the switch in Sidetrack but not that you push the man in Footbridge. One explanation for this difference is that it is impermissible to intentionally cause harm as in Footbridge, but permissible to cause harm as a foreseen but unintended consequence of one’s action as in Sidetrack.
Whatever the explanation is, one can wonder how people would react if
- the five people are walkers who want to take a short cut but are not allowed to walk in the tunnel.
- the five people are copper thieves stealing railway copper.
- the single person is a railway worker doing his job.
- it’s not you who have to take the decision but a mentally weak person who often takes wrong decisions.
- the large man in Footbridge is an escaped murderer (sentenced to death, if that makes a difference to you).

I can give my thoughts free rein and add more situations, or I can combine them. However, I think that one thing is clear: what you’ll do and what you’ll find permissible will depend on the situation. It makes no sense to strip off the context and then say in the abstract what is right, or, in other thought experiments, what we’ll do. It’s the context that determines what is acceptable or right, and this context is often more complicated than we can imagine in our philosophical armchair.
Monday, February 11, 2013
The Greeks knew it already: “A healthy mind in a healthy body”.
(entrance sign stadium of Sparta, Greece; see text circled in red).
Two years ago I wrote a blog titled “Running with my mind”. It was about how we can improve our physical condition by simulating physical exercises in our mind: simply by thinking that we are doing exercises, our fitness increases. But how about the other way round? Does exercising influence our mental condition? Actually, I already knew that such an inverse relation exists, but that was all. The theme sank to the bottom of my mind and I forgot it. But last week I saw a little article in the science supplement of a newspaper saying that sportsmen have better cognitive functions and thicker cerebral cortices than students. I wanted to know more about it. For if it were true, I should beat them all, for I am a sportsman and a lifelong student, as my regular readers know. So I started to google the theme, and what I found was very encouraging and pleased my mind, and I would have taken my running shoes immediately if I hadn’t had to write this blog first. It’s too much to summarize here all I found, but one thing is clear: the best way to improve and strengthen your brain and your cognitive functions is not thinking, doing mental games or living in a stimulating environment, but running, cycling or any other aerobic bodily activity. For instance, in one experiment mice were divided into several groups. Some got special food; another group lived in a stimulating environment; a third group did nothing special; and the fourth group got running wheels (in which mice enjoy exercising) and nothing more. Afterwards the last group performed best when given cognitive tests. Other experiments showed that mice that were forced to work harder by using a treadmill performed better than mice that got a simple running wheel. And don’t tell me that this concerns only mice. Experiments with human test groups show the same results.
One reason why it works is that aerobic exercise stimulates the blood circulation in the brain. But there is more. Exercise also helps build new brain cells and extend neural networks. But you might reply: mental training will do this as well. That’s true, but there is a difference. Brain cells formed by mental exercise are specialized. They are only good at performing the task they were made for. However, brain cells formed by aerobic exercise are multifunctional. They are not only apt for making you run but also for other, cognitive tasks.
And all this is not only for young people. The older brain also profits from bodily exercise, and then it need not be running; walking and cycling will do as well. Such aerobic exercises make the older brain younger, and slow or even reverse the decay of the brain and the occurrence of serious mental illnesses like Alzheimer’s.
I could mention many more experiments, but I would like to finish with this one, which I quote from a blog in the New York Times (see below for the link):
“21 students at the University of Illinois were asked to memorize a string of letters and then pick them out from a list flashed at them. Then they were asked to do one of three things for 30 minutes — sit quietly, run on a treadmill or lift weights — before performing the letter test again. After an additional 30-minute cool down, they were tested once more. On subsequent days, the students returned to try the other two options. The students were noticeably quicker and more accurate on the retest after they ran compared with the other two options, and they continued to perform better when tested after the cool down.”
So, when you want to learn something new, as a student or for another reason, want to do a complicated mental task, or are afraid of forgetting something, and the like, just take your running shoes or your bike, and you’ll become smarter.
There are many websites that describe the results that I mentioned here, but this one gives a good overview: http://www.nytimes.com/2012/04/22/magazine/how-exercise-could-lead-to-a-better-brain.html?pagewanted=all&_r=0

For the quotation: http://well.blogs.nytimes.com/2009/09/16/what-sort-of-exercise-can-make-you-smarter/. Look there also for the original source.
Monday, February 04, 2013
Recently I argued that the idea that our meta-thoughts can influence the way we think and, by means of that, our behaviour can be undermined by the “third factor counterargument”: a piece of behaviour and a conscious thought that seems to trigger it can both be caused by a third factor that makes both the behaviour and the thought happen (see my blog dated Jan. 14). So my thought to go to a bookshop in Utrecht tomorrow and my actually taking the train then in order to go there may both be caused by my watching a book programme on TV now. It’s not unlikely that in this instance it’s true, although I think it’s not as simple as that. Anyway, whatever may be the case, it is a practical problem that against any sound scientific argument or theory another, equally sound argument can always be brought forward that seems to refute it. Actually this is the basis of scientific progress, but on the other hand, how far do we go? If we can fundamentally refute everything, only cynicism remains. So I think that now and then we must show determination and say: this is what I think is true and this is what I want to defend. Even though we know and accept in our hearts that everything can be falsified.
This is what I thought of when I put forward, somewhat reluctantly, the argument that a piece of behaviour and a related thought can be caused by a third factor. For actually I think that in some way our conscious thoughts do cause – or at least influence – our behaviour. Baumeister and his colleagues, especially, have analyzed many studies in this field and defended the view that it is quite likely that such a causal relation exists. I think that their arguments are convincing, keeping in mind, of course, what I just said about possible falsification. Here I don’t want to summarize their analysis or repeat their arguments (see the reference below for that). However, I think that it is interesting to list their “four broad conclusions”, as they call them, about how consciousness influences behaviour. Here they are:
1) Conscious thought integrates behaviour across time. It “is helpful for enabling present or imminent behavior to benefit from past and future events, and for present and recent events to influence future behavior”, as Baumeister et al. put it. Planning is an example of this.
2) Conscious thought relates social and cultural factors to the individual’s behaviour. It mediates sharing information with and understanding other people and dealing with the human world we belong to. Negotiating is a case in point.
3) Conscious thought helps to choose in situations with several alternative possible forms of behaviour. It helps to deviate from the road we would take if we followed the automatic pilot within us. Again I could mention negotiations, or buying something, as simple examples.
4) In fact, everything we do is a mixture of conscious and unconscious processes. Therefore, many apparently exclusively unconscious pieces of behaviour have a conscious component. Baumeister et al. mention giving instructions and focusing attention as instances where the conscious part is overstressed, but certainly there are cases where it is the other way round. A division into conscious and unconscious behaviour seems to be a false dichotomy.
In view of these four points, the idea that conscious thinking is a mere epiphenomenon is quite unlikely, even though it still remains possible that somebody will come up with factors that might explain both our behaviour and our thinking about it as processes that are not immediately related. Or that somebody will argue that our thinking is simply the steam from the whistle of the machine within us (see Thomas Huxley, for instance): it shows that there is activity in our body, but it doesn’t causally make the body move.
Source: Roy F. Baumeister, E. J. Masicampo, and Kathleen D. Vohs, “Do Conscious Thoughts Cause Behavior?”, http://carlson.umn.edu/assets/165663.pdf.
Monday, January 28, 2013
Reading Parfit’s On What Matters, I came across this passage:
“Turn next to lying. Herman writes that …
‘… Universal deception would be held by Kant to make speech and thus deception impossible.’
Korsgaard similarly writes:
‘lies are usually efficacious in achieving their purposes because they deceive, but if they were universally practiced they would not deceive...’
But no one acts on the maxim ‘Always lie’. Many liars act on the maxim ‘Lie when that would benefit me’.” (Vol. One, p. 278; my italics)
Monday, January 21, 2013
Juniper bushes and grave mound
The bike ride wasn’t to be philosophical but historical, or rather prehistorical. Instead of making the obligatory Sunday afternoon walk to the centre of the town when we were there in that little provincial capital, I proposed to my wife a bike ride through the fields and woods east of the town. I knew a few interesting sites there and I wanted to take photos.
So a few minutes later we were cycling along the street that leads to the park where a manor had once stood. What was left of the house had been torn down long ago and only a tomb remained.
Once in the fields we passed a farmhouse with a striking architecture not typical of the region. We crossed a brook and turned left. The centuries-old farmstead was gone. It had fallen victim to arson, just after it had been restored. Nobody knows what the reason for this act was. We followed a muddy path, trying to avoid the puddles and pools, and suddenly I saw what I had come for: a grave mound, there in the field. As such it is nothing spectacular, but the idea that people had built it millennia ago to honour their dead and that it was still there … And then, in the wood behind the field, many more: dozens of grave mounds that had withstood the ages.
The toadstool-shaped signpost showed that we had to go left. Again fields, again a little wood and muddy roads. A fence marked the border of the nature reserve and archaeological reserve. It was a place where I loved to come and play as a little child, with my parents. Later, when I was older, I sometimes made a bike ride there after classes and before I started my homework. Nothing had changed since then. Only the fence was new.
We put our bikes against a tree, opened the gate and walked to the heather field. Not just a heather field but one of the few places where you could see juniper bushes. And in front of us the remains of prehistoric farmlands. With some effort you could still see the low embankments that once separated the parcels. Who were the people who had lived there and had struggled to survive on the very poor soil? Where did they come from and where did they go?
When we followed the path to the right, again some grave mounds, rather high ones. The places where these petty farmers had been buried? Or only their leaders? Or maybe they were quite rich then? And what did these people think, and think about? But the dead don’t talk anymore, so we’ll never know.

Before us a marsh with a mere stretched out. Somewhere behind the trees on the other side there was a dolmen. I took my photos. Then we cycled back from prehistory to history. To the left we saw what remained of the low rampart raised to protect the tent of a military-minded bishop who had attacked the region. In vain. Returned to the present, the coffee was waiting for us.
Monday, January 14, 2013
Actually it’s an intriguing idea that people can think consciously, or that they can “think”, for short. Some scientists believe that thinking is merely an epiphenomenon. From this point of view, it would make no difference whether we thought or not: we would behave in the same way in both cases. I don’t endorse this viewpoint, but here I’ll not discuss the arguments pro and con. Other scientists take the view that our thinking does cause or at least does influence our behaviour. This idea seems more plausible to me, but here I’ll pass over this viewpoint, too. However, even if the idea that thinking is a mere epiphenomenon were true and man were a very complicated kind of machine (in the way Descartes thought that animals are), it remains intriguing: for who has ever heard of a man-made machine that thinks? Apparently man is more than just a construction of nuts and bolts that fasten a physical structure.
What I find even more intriguing than the idea that man can think is that man can think about thinking. In my last blog I gave an example of such “meta-thinking”, when I wondered whether a certain thought of mine was a case of cognitive dissonance reduction or whether I “really meant” what I thought.
Scientists are divided over whether thoughts can influence the behaviour of the thinker. But how about meta-thoughts? Take this example from my last blog: you always wanted to buy a yellow car, but in the end you buy a grey one, because the dealer had only this colour in stock. You think: “Actually a grey car fits me better”. Then you realize that you are reducing a cognitive dissonance and you change your opinion: “A grey car doesn’t fit me better. I wanted a yellow car, but the dealer did not have it in stock. I had no choice, but I still prefer a yellow one”. In this case you had a meta-thought, but it had no influence on your behaviour. If your thinking is epiphenomenal, then your meta-thinking is as well.
Is this always so? I could take the study by Festinger again to show how meta-thinking might influence behaviour, but actually thinking about thinking in order to influence our behaviour is something we often do. For instance, you have to do an exam on a theoretical subject. Your traditional strategy is to learn all the stuff by heart by repeating the required reading so often that it becomes stored in your brain. Then your teacher tells you that another good method is explaining the subject matter to someone else. You decide to test the method and you ask a friend to be your audience. By doing so, your thoughts about how you think have changed your behaviour. In this way our meta-thoughts often change our behaviour.
The case just described seems to substantiate the view that our meta-thoughts can influence the way we think and, by means of that, our behaviour. If so, it will not be difficult to prove that thinking can directly cause behaviour as well. However, we cannot exclude the possibility that some unconscious process in the brain was triggered by what the teacher said and that it is this unconscious process that both changed the learning behaviour and produced the meta-thoughts about learning. Then meta-thoughts are adaptations to what you actually do, just as reducing a cognitive dissonance is a way to make thoughts and facts fit.
Monday, January 07, 2013
In my last blog I wrote about the theory of cognitive dissonance. Say we expect that the world will be destroyed on December 21, 2012. However, the prophecy does not come true and two weeks later the world still exists. We feel quite ill at ease and we try to understand what went wrong. We think: a supreme being has given the world a second chance. Therefore we try to convince the people around us that the world can be saved. According to the theory of cognitive dissonance, we are then trying to reduce the dissonance between our expectation and what actually happened.
Suppose now that I am waiting for the 10.05 a.m. train to Utrecht, where I’ll have an interview for a job. However, the train doesn’t come, nor does the next one fifteen minutes later. So I call the Railway Information Service. The telephonist tells me that there is a power breakdown and that there’ll be no trains for some hours. Next I call the selection committee to say that I’ll be late, since I have to take my car.
What’s the difference? If you don’t believe in the prophecy, you’ll probably say: in the first case, the facts are adapted to the belief, and in the second case the belief is adapted to the facts. Or something like that.
That’s clear, you might think. Is it? Take these examples:
- Many years ago I took part in a 5K track race (running). One of the other participants was a friend of mine. I finished the race in a good time, but my friend left the race after only two laps. “I wasn’t in the mood”, he told me, although it had taken him three hours of preparation to get to the start, for the race was in another town. Do you believe him? I think that my friend himself believed what he said, but I didn’t, for he would be the last one to stop for such a reason.
- You want to buy a new car. You always said: “When I buy a new car, it must be a yellow one, because not many people have that colour.” However, the salesman tells you that you would have to wait two months for it. Because your old car really needs repair, you don’t want to wait so long and you choose a grey one of the same type. Later you say to yourself: a grey one fits me much better; otherwise everybody would recognize me from afar and say: “There’s John with his yellow car.”
- Leon Festinger and James Carlsmith asked a group of students to perform a boring task. The experiment was in fact more complicated, but the essence is this: after having performed the task, the students were asked to explain it to other people and to tell them that it was very interesting. Half of the students got one dollar for this job and the other half got twenty dollars. When interviewed afterwards, the latter told the researchers that the original task was boring, while those who got one dollar said they liked it. Apparently, receiving twenty dollars was a good enough excuse for telling a lie. The students who received only one dollar, however, had a mental problem: the low payment did not compensate for the psychological burden of lying. So they came to feel that the original task had been interesting.
Without a doubt I could have chosen better examples, but what I want to say is this: often we invent reasons that fit the facts after they have taken place. Moreover, there is no fundamental distinction between reducing a cognitive dissonance and giving a “real” explanation. Or rather, the extreme cases are clearly different: in the case of a cognitive dissonance the facts are adapted to the belief, whereas in the case of “real” explanations the reasons are adapted to the facts. But between these extremes the reasons can be a bit more of the one or a bit more of the other. There the difference is actually gradual, and reducing a cognitive dissonance is something everybody does to some extent and quite often. Something happens that we did not expect or did not want to happen, and we have to act or form an opinion. So we rationalize. However, this doesn’t imply that we throw dust in our own eyes. That may happen, but often our reasons are good reasons.
Ever since I heard of the theory of cognitive dissonance I have often thought: is this thought of mine a case of cognitive dissonance reduction, or do I really mean it?
Monday, December 31, 2012
Sour grapes: Wasn’t it Aesop who invented the theory of cognitive dissonance?
Actually I didn’t want to write about the end-of-the-world nonsense. It isn’t worth giving it so much attention, and I agree with the Russian president Vladimir Putin (probably the first and last time I’ll agree with him): the end of the world will come in about 4.5 billion years’ time. But the event made me think of a study by Leon Festinger and his co-workers that I learned about when I studied sociology long ago: When Prophecy Fails (first published in 1956). The book was still rather new when I attended my lectures.
In this book the theory of cognitive dissonance is described for the first time. The details of the study and the theory can easily be found elsewhere on the Internet, but the essence is this: members of a small sect somewhere in the USA believe that the world will be destroyed by a flood but that only they will be saved (by a UFO). On December 21 the believers meet at a pre-determined time and place, but nothing happens. Although they avoided publicity before the presumed date of the end of the world, now the believers think that the world has been given a second chance and they dramatically step up their efforts to spread their message to the world.
What happened then, from a psychological point of view, according to Festinger and his co-workers? Before the final date the members of the sect have a certain belief about what will occur. However, the belief doesn’t come true, for the world hasn’t been destroyed as prophesied. Therefore there is a discrepancy between the original belief and the facts. Festinger et al. call this a “cognitive dissonance”. Such a dissonance is experienced as unpleasant by most people, so they want to get rid of it. In the words of Festinger et al.: the cognitive dissonance has to be reduced. Therefore the believers in the destruction of the world think that there is a reason why the world has been saved (“the world gets a second chance”) and they adapt their behaviour to it (in this case: they try to make converts). The result of the new interpretation of the belief and the adaptation of behaviour is that the gap between belief and fact (so the cognitive dissonance) is psychologically reduced.
According to the original theory the reduction process is unconscious. Moreover, it is not limited to sectarian beliefs and behaviour. Actually, reducing cognitive dissonance is something everybody often does when there is a discrepancy between a belief, attitude, value, norm, etc. and the facts. It is a common psychological mechanism. Later the theory was modified in the sense that reduction can also happen consciously.
These were some of my thoughts when I heard all the fuss about the supposed end of the world because the Maya calendar ended on December 21st last (it’s interesting that the Mayas themselves had a different interpretation of what this meant). This case is unlike the one analysed by Festinger et al. in so far as those believers avoided publicity before the predicted end of the world, while now the prediction received much attention before it was supposed to take place. Anyhow, I have some questions. What will the real believers do now that Doomsday did not take place? Will they flood the world with a new interpretation of their sectarian belief and with a new Doomsday prophecy? Moreover, what progress will the study of this failed prophecy bring to the social sciences and especially to psychology? I am waiting to see what will happen.
Monday, December 24, 2012
Lith, the Netherlands: Photo with pinhole camera
There is no good philosophy but only philosophy that is not bad. This was the conclusion of my last blog. But how about photography, for instance? It is often said: with these modern digital cameras everybody can take a good picture. And although we know that it is an advertising slogan, many people believe it’s true. For isn’t it so that by simply pressing a button we can nowadays take pictures that are sharp, well exposed and, thanks to the newest techniques, taken just at the moment that everybody is smiling? What more do we want in a good photo? Okay, you need to keep your camera straight, but Photoshop or another good program can fix that, in case you forgot. So why do we still need photographers? As a result it has become increasingly difficult to make a decent living from photography. Another consequence is that the quality of photos in newspapers and magazines is often low.
But it’s strange: on the one hand there is no accounting for taste, so seen that way you can’t say: this photo is good, that photo is bad. All criteria for quality in art are subjective, aren’t they? On the other hand, people do say: this photo is better than that one. How can they say that if there are no objective quality criteria? Apparently there are bad photos and photos that are not bad, just as there is bad philosophy and philosophy that is not bad. However, good and bad can have two different meanings here: it can mean technically good or bad, or it can mean good or bad with respect to content (and maybe we can apply this distinction to philosophy as well). The former refers to aspects like sharpness, exposure and other “technical” matters. The latter concerns what the image in the photo represents and how it is composed. A good photo tells a story, for instance, or we call the image beautiful, intriguing, or a good likeness.
A photo that is good in the first (technical) sense need not be so in the second sense (concerning its content), and that’s what we often see. But the other way round? Does a photo with “good” content also need to be technically good? In the past it was generally thought that a technically bad photo could not be good anyway, but why should that be so? I always say: a photo is good if it represents what it is supposed to represent. A feeling need not be sharp but can also be blurred, so to speak, and that must show in the image. If we wanted to make a picture of John and it shows John, it is in fact a good photo; other aspects are secondary. This is striking when I present photos at an art market or in an exhibition. When I show sharp and otherwise technically good photos next to photos taken with a pinhole camera, which are a bit blurred because such a camera has no lens, the pinhole pictures draw the attention of the visitors, and the technically good ones less so, even when both types of photos have basically the same content. Obviously there is more than just good or bad in photography. Let’s call it expression or feeling, the way we look at things. Indeed, there is quality in photography – I’ll certainly not say there isn’t – but I think it is not about good and bad; it is rather a matter of seeing and perspective.
Monday, December 17, 2012
Plato: A not bad philosopher?
What is good philosophy? What is bad philosophy? These questions occurred to me after I had disproved the Lottery Paradox. For how can it happen that a thesis like the Lottery Paradox persists so long, while in my opinion it is so easy to refute? Is it really such a bad kind of philosophy as I think it is, or does it have strong points as well? Since I do not have a thorough formal philosophical training, because it was another route that led me to philosophy (which is not unusual for philosophers), I cannot fall back on theoretical insights or procedures learned during my education, nor do I have the books for it. What I did, therefore, is what most people do today, I think: I googled my questions. However, it didn’t help me, for I found a lot on the philosophy of the good and the bad but nothing about what good or bad philosophy might be. The only thing I found was that philosophy must not be inconsistent, but that’s obvious, I should say. Moreover, inconsistency may be a criterion for bad philosophical reasoning, but reasoning is not good merely because it is consistent. It would be bad philosophy to contend the latter, since there are other factors that can make an argumentation wrong even if it is consistent. This thought is in line with Karl Popper’s brilliant idea that fundamentally it is possible to refute a theory but never possible to prove it. If this idea is applied to my questions, it means that one cannot say what good philosophy is, although one can say “that is bad philosophy”. Or rather, one can say “that is a bad philosophical argumentation”. Then one enters the fields of argumentation theory and methodology and their rules. Or, even more, then Paul Feyerabend’s “anything goes” applies: any argumentation, even a non-standard one, that undermines another argumentation makes the latter a bad one (basically, that is, for this thesis rests on certain suppositions, such as that the former reasoning is correct).
Does this mean that we can say nothing about what good philosophy is, but only that a case of philosophy is “not bad”? By chance I recently received a contents alert from a philosophical journal that drew my attention to the article “Bad Analytical Philosophy” by Pascal Engel. The first sentences read: “Most analytic philosophers agree that good philosophy ought to satisfy certain minimal requirements: it should be clear, precise, well argued, putting forward an explicit thesis and exemplify the principle that truth emerges more readily from error than from confusion. Everyone agrees that it should be also interesting, relevant, reasonably original, rigorous, and that it should advance theoretical or critical proposals on the problems and puzzles which have shaped the analytic tradition or which are the object of current concern. Many philosophers are confident that when these basic desiderata are met, analytic philosophy cannot be bad. Nevertheless we all know that there is bad analytic philosophy.” And I want to add here: what is valid for analytical philosophy is valid for philosophy in general as well. However, in the light of Popper’s idea that we cannot positively prove a theory, good philosophy cannot be guaranteed even when we follow the requirements listed by Engel. These requirements can be guidelines at most. They’ll never reach the status of criteria that make philosophy good when strictly applied, although it will be possible to lay down criteria that make philosophy bad (even if these will not be exhaustive).
Where does this get us? The upshot is that there is no good philosophy; or rather: logically we cannot say that a piece of philosophy is good, but at most that it is not bad. But maybe this is a case of bad philosophy.
Source: Pascal Engel, “Bad Analytical Philosophy”, in Dialectica Vol. 66, N° 1 (2012), pp. 1–4: p. 1
Monday, December 10, 2012
Two blogs ago I wrote about the Lottery Paradox. I showed that it was false. However, it kept straying through my mind, not because I had doubts whether it was really false, for it simply is, but because I wondered what went wrong in the paradox and why it is still seen as valid by some. Well, I cannot give an answer to the latter, but I can say something about the former. This time I shall be less abstract and formal, so that those readers who got stuck halfway two weeks ago will now hang on my every word.
The Lottery Paradox says that we can argue that no ticket in a lottery will win, although certainly one ticket will win if the lottery is fair. What went wrong in this reasoning, besides the fact that the statistical argument isn’t correct? I think that the essence of the failure lies in the first basic principle. It runs, as you’ll remember: “If it is highly probable that p, then it is rational to believe that p.”
The central concepts in this principle are “probable” and “rational”. But what do these concepts mean? In the argument that is supposed to substantiate the Lottery Paradox they are not explained. I think that this is the real reason that the argument goes wrong. Let’s look first at “probable”. In the context of the paradox it has a double meaning. First it is treated as a psychological concept, but next as a concept from probability theory (or from statistics). The first principle of the Lottery Paradox says something like this: if it is very likely that p will happen, you can suppose that it really will, even though sometimes it doesn’t. For instance: the timetable says that the next train will leave within 15 minutes, and since the timetable is usually correct, I had better go to the station now (even though it may happen that the train is late this time). But then, in order to “prove” the Lottery Paradox, “probable” suddenly gets a statistical meaning, and then the argument is false, as I explained in my blog two weeks ago. This doesn’t alter the fact, though, that the psychological interpretation makes sense in our daily life.
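To make the statistical side of this concrete, here is a minimal sketch of my own (the number of tickets is purely illustrative and not taken from the original argument): in a fair lottery every single ticket is almost certain to lose, yet the probability that all tickets lose is zero, so believing of each ticket that it will lose does not add up to a rational belief that no ticket will win.

```python
# A minimal sketch (not from the original argument) of the statistical point:
# in a fair lottery with N tickets, each single ticket is almost certain to
# lose, yet the probability that *all* tickets lose is exactly zero, because
# exactly one ticket must win.

N = 1_000_000  # hypothetical number of tickets

p_single_ticket_loses = (N - 1) / N   # ~0.999999 for each individual ticket
p_no_ticket_wins = 0.0                # impossible in a fair lottery

print(f"P(a given ticket loses) = {p_single_ticket_loses:.6f}")
print(f"P(no ticket wins)       = {p_no_ticket_wins}")

# If "highly probable" licensed belief, we would believe of every single
# ticket that it loses; conjoining those beliefs would mean believing that
# no ticket wins -- an event whose probability is 0, not "high".
```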
There is also something wrong with the way the concept of “rational” is used in the “demonstration” of the paradox. What is rational depends largely on the situation in which we have to act. Take the train example again. Suppose I want to do some shopping in a nearby town. So I think: “I must leave home now, although it might happen that the train doesn’t leave within 15 minutes, because it is late or because the timetable has changed.” Then it’s rational to go now and not to check possible changes on the Internet, given that there is a train every 15 minutes. If I am wrong, the consequences are negligible.
Take now this example from my blog two weeks ago: I work as a security officer at an airport, where I check the passengers at the gate. Say every year ten million passengers pass through this airport and only once in five years someone is caught who might have the intention to put a bomb in a plane. Therefore it is highly likely that the next passenger is a decent person and not a terrorist. Must I then say: well, it is very, very likely that the next person is not a terrorist; I am a rational person, so I don’t check her? Of course not, for in view of the consequences in case she is a terrorist, it is better to check her, and the next passenger, and the next … Here it is rational not to believe that p, even though it is extremely probable that p.
Don’t define your concepts and you can get any conclusion you like.
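As a small postscript to the airport example, here is a rough decision-theoretic sketch of my own (the passenger numbers come from the example above, but the costs are made up for illustration): what makes checking rational is not the probability alone but the expected cost of acting as if p were certain.

```python
# A rough sketch (my own illustration, with made-up costs) of why it can be
# rational to act as if p might be false even when p is extremely probable:
# what matters is the expected cost, not the probability alone.

passengers_per_year = 10_000_000
threats_per_year = 1 / 5                             # one caught every five years
p_threat = threats_per_year / passengers_per_year    # = 2e-8 per passenger

cost_of_check = 1.0          # hypothetical cost of screening one passenger
cost_of_missed_bomb = 1e12   # hypothetical cost of letting a bomb on board

expected_cost_check = cost_of_check                   # we always pay the screening cost
expected_cost_skip = p_threat * cost_of_missed_bomb   # tiny probability, huge loss

print(f"P(next passenger is a threat) = {p_threat:.1e}")
print(f"Expected cost per passenger if we check: {expected_cost_check:,.0f}")
print(f"Expected cost per passenger if we skip:  {expected_cost_skip:,.0f}")

# With these (assumed) numbers, skipping the check is far more costly on
# average, so checking every passenger is the rational policy, even though
# each individual passenger is almost certainly harmless.
```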