Knowledge can become outdated because what we thought we knew turns out not to be true. The case of Galileo that I described two weeks ago is an instance of this: the sun does not turn around the earth, as people thought in his day, but the other way round, as Galileo showed. So what we thought we knew has never been the case. Actually, it has never been knowledge: it was false knowledge. But has all supposed knowledge that later turned out to be false knowledge always been false knowledge? Take the so-called bystander effect, the phenomenon that most people do not help a victim in an emergency (for instance a drowning person) when other people are present, while they would help if they were there alone. However, I recently read in an article that it has become difficult to replicate the bystander effect in an experimental setting. Apparently people have changed – maybe because the phenomenon got much attention in the media and in publications – and the bystander effect no longer exists: people now do help in emergency situations, even when they are in a group. What once was true has now become false and has been replaced by new knowledge.
On the face of it, it seems that a piece of knowledge has simply become outdated and been replaced because we know better now. However, there is an important difference with the Galileo case. There, the original idea that the sun turns around the earth had always been false, but the bearers of this supposed knowledge did not realize it. The bystander effect, by contrast, has not become outdated because it never was true, for once it was. It has been superseded because reality has changed. This is typical of the social sciences: people acquire knowledge of certain social facts, which are true at the moment they hear about them. However, for one reason or another, people are not satisfied with the facts and they change them. Then old knowledge becomes the foundation of new knowledge instead of being falsified. By the way, it can also happen that what once was false is made true: a teacher wrongly thinks that some students in his class are better than other students, and just because of his – often unconscious – behaviour the allegedly better students actually do become better, as psychological research has shown: false knowledge has turned into true knowledge. And in fact, the reversal of the bystander effect is also a case in point: the false knowledge that people tend to help in emergency situations even when others are present became true once people became conscious of their behaviour.
This interpretative effect (called “double hermeneutics”) does not exist in the natural sciences. Yet there, too, it can happen that knowledge becomes outdated and has to be replaced by new knowledge without being falsified. Old medical knowledge is often replaced by new knowledge, not because it has been falsified, but because now we know better. But in a certain sense medical science is a social science. The same cannot be said of biology and the biological world, for instance; yet nature is in continuous development, and what once was true of it is no longer valid many years or ages later. Even if our knowledge isn’t true, it needn’t be false. There are many ways in which it can turn out to be wrong.
When we know something, or at least think we do, do we really know it? Can it be that on some occasions we know a thing and on other occasions we do not, even though we haven’t forgotten it and can tell exactly what it is that we are supposed to know? Browsing the Internet, I found this interesting case by Keith DeRose, which I quote from Nestor Ángel Pinillos, “Some Recent Work in Experimental Epistemology” (Philosophy Compass 6/10 (2011): 679):
 My wife and I are driving home on a Friday afternoon. We plan to stop at the bank on the way home to deposit our paychecks. But as we drive past the bank, we notice that the lines inside are very long, as they often are on Friday afternoons. Although we generally like to deposit our paychecks as soon as possible, it is not especially important in this case that they be deposited right away, so I suggest that we drive straight home and deposit our paychecks on Saturday morning. My wife says, “Maybe the bank won’t be open tomorrow. Lots of banks are closed on Saturdays.” I reply, “No, I know it’ll be open. I was just there two weeks ago on Saturday. It’s open until noon.”
 My wife and I drive past the bank on a Friday afternoon, as in [the first case], and notice the long lines. I again suggest that we deposit our paychecks on Saturday morning, explaining that I was at the bank on Saturday morning only two weeks ago and discovered that it was open until noon. But in this case, we have just written a very large and very important check. If our paychecks are not deposited into our checking account before Monday morning, the important check we wrote will bounce, leaving us in a very bad situation. And, of course, the bank is not open on Sunday. My wife reminds me of these facts. She then says, “Banks do change their hours. Do you know the bank will be open tomorrow?” Remaining as confident as I was before that the bank will be open then, still, I reply, “Well, no. I’d better go in and make sure.”
The first case is clear: I know that the bank is open on Saturday morning and I behave accordingly. But what about the second case? If I am sure and know that the bank is open on Saturday morning, there is no need to check. However, although I am confident that I know it, I check nevertheless. But there can only be a reason to check whether the bank is open if I do not know it, or if I am not sure enough about it to be able to say “I know it”. Pinillos then discusses two possibilities: either knowledge is dependent on the context or, although knowledge as such is true, it is sensitive to stakes, i.e. “the idea that whether an agent who believes P also knows P may depend on the practical costs (for that agent) of being wrong about P: when the stakes are high, the epistemic standards for attaining knowledge may be higher” (676). Pinillos rejects the first possibility. However, I think that there is at least one other possibility, namely that there are degrees of knowledge, an alternative that Pinillos does not discuss. On this view too, just as Pinillos says, whether we are going to check our knowledge depends on what the consequences are if we are wrong. But we don’t check when we are 100% sure. And if the consequences alone determined whether we check, we would have to go on checking after each check, for the consequences in case we are wrong do not change with each check. So why would we stop checking? Because after the first check our knowledge has increased and we feel sure, and that’s why we stop and say: we know it.
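The degrees-of-knowledge idea can be made concrete with a toy decision rule. Everything below is my own illustration, not anything from Pinillos’ paper: the agent checks as long as the expected cost of being wrong exceeds the cost of one more check, and each check raises confidence a little, so the checking terminates.

```python
def should_check(confidence, cost_if_wrong, cost_of_check=1.0):
    # Check again only while the expected loss from acting on a false
    # belief exceeds the price of one more check.
    return (1 - confidence) * cost_if_wrong > cost_of_check

def checks_until_sure(confidence, cost_if_wrong, boost=0.09):
    """Count how many checks the agent performs before stopping,
    given that each check raises confidence by `boost` (an assumed number)."""
    n = 0
    while should_check(confidence, cost_if_wrong):
        confidence = min(1.0, confidence + boost)
        n += 1
    return n

# Low stakes (the first bank case): no check is worth its cost.
print(checks_until_sure(0.95, cost_if_wrong=10))   # 0
# High stakes (the second case): one check, and then we stop.
print(checks_until_sure(0.95, cost_if_wrong=30))   # 1
```

Note that if `boost` were zero, i.e. if checking did not increase our knowledge, the high-stakes loop would never end, which is exactly the regress the paragraph above describes.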
I do not know whether you are acquainted with what is happening in the academic world in other countries, but recently in the Netherlands social psychology professor Diederik Stapel was dismissed because he had fabricated research data; not just once but so often that his university decided to report it to the police. In Germany, defence minister Karl-Theodor zu Guttenberg had to resign when it came out that his PhD thesis was full of plagiarism. Since then more such cases have been discovered, both in Germany and in the Netherlands. So, inside and outside our ivory tower of science and the humanities, not everybody appears to be as original as s/he pretends.
I do not think that I am original in most of my blogs here, but I do not pretend to be, and at least I mention my sources, as every reader can check; besides, lack of originality is not a crime. But when I looked at my recent blogs on knowledge again today, I was reminded of the case of this Dutch ex-professor Stapel. Let’s suppose that, looking for inspiration for my blogs, I had read an article by Stapel and, since his falsifications had not yet come to light, I had good reasons to think that the research in the article was real and that the data were correct. The argumentation in the article was also sound, so that I could endorse his conclusion that A was the case. Therefore, given my definition of knowledge as methodically justified interpreted belief (see my blog dated Nov. 14, 2011), one could say that I knew then that A was the case. But, in view of what we know now about Stapel, can we still say that I knew it then? I think so, for at the time the conclusion was methodologically justified for me, and for many other people too, although not for ex-professor Stapel himself. However, now it is no longer knowledge. Does this mean that afterwards I have to revise the idea that I knew it then?
One can argue that the “knowledge” in Stapel’s article has never been knowledge at all, but in a certain sense what happens here is not so different from what normally happens in science, apart from the fact that normally the data are not fabricated. For instance, we have an idea about something in reality, say that on average poplars are taller than oaks. We gather data in order to test the idea, for example by measuring 100 mature poplars and 100 mature oaks in the area where I live and comparing the average heights of both. Then we can say that we now know that on average poplars are taller than oaks. But usually things are not as simple as that. In the days of Galileo most people thought that the sun turned around the earth, but Galileo showed that it was the other way around. So, did we get a change in knowledge? Before Galileo people had good and sincere reasons to think that they knew that the sun turned around the earth, and this knowledge hadn’t been fabricated; so if you asked someone what s/he knew about the earth and the sun, you got the answer “the sun turns around the earth”. And today we are in the same situation: we think that we know a lot and probably we do, but for every piece of knowledge it is quite possible that sooner or later someone will say: well, I have developed a new research method which is better than the old ones (just as Galileo used a telescope for studying the sky, which was new in his days), I have applied it, and my conclusions are different. The upshot is: fabricated knowledge is false, it’s true (unless by chance the fabrication happens to correspond to reality), but it is not the case that what we sincerely and in a methodically justified way think we know is true, not even for us, as the Galileo case shows. Only the chance that it is true is bigger. And enhancing this chance is what science and the humanities are about.
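The poplar/oak example can be sketched as a tiny simulation. The heights below are randomly generated under assumed average heights (poplars ~30 m, oaks ~22 m); they are an illustration, not real measurements:

```python
import random

random.seed(42)  # fixed seed so the "study" is reproducible

# Two simulated samples of 100 mature trees each (assumed means and spread).
poplars = [random.gauss(30, 4) for _ in range(100)]
oaks = [random.gauss(22, 4) for _ in range(100)]

mean_poplar = sum(poplars) / len(poplars)
mean_oak = sum(oaks) / len(oaks)

# Our (provisional) piece of knowledge:
print(mean_poplar > mean_oak)  # True for this sample
```

A real study would of course add a significance test, and even then the conclusion would remain revisable by better methods or better data, which is exactly the point of the paragraph above.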
When we make a mistake, we regret it and we try to correct it or, what maybe happens more often, we try to conceal it (which is as human as human can be). That’s okay – I mean the regret, of course – and that we want to do better is inherent in the meaning of the word. But are mistakes really so bad? Everybody knows the expression “we can learn from our mistakes”, and so mistakes have a positive side, too. However, there is more, for according to Stanford University psychologist Carol Dweck – and I hope that she doesn’t blame me for the shortcut – people who make mistakes are more flexible than those who do not. Basically, according to Dweck, one can distinguish between people who have a fixed mindset and people with a growth mindset. In a fixed mindset, people believe that their basic qualities, like their intelligence or talent, are simply fixed traits. People with a growth mindset, on the other hand, believe that their most basic abilities can be developed through dedication and hard work. The first type of people tries to gather stuff that supports their ideas, while the second type tries to develop their insights. But precisely the latter kind of people has to try new things, and by doing so they have to take chances. And then, you’ve guessed it, they run the risk of making mistakes. But this type of people also has a better awareness of their mistakes and of what to do with them. Therefore they advance more than those with fixed mindsets (who, alas, often are just the persons with the biggest talents). Thus, open your mind, don’t fear mistakes and in the end you’ll profit by it. A bit like “reculer pour mieux sauter”, as they say in French.
In my last blog I characterized knowledge as methodically justified interpreted belief in order to make clear that it is impossible to say that there is a certain quantity of knowledge (be it measured in bytes or otherwise). I am not alone in characterizing knowledge that way. Here I want to mention only Günter Abel, who holds a related view (see his “Forms of Knowledge: Problems, Projects, Perspectives”, in Peter Meusburger, Michael Welker and Edgar Wunder (eds.), Clashes of Knowledge, Springer, 2008, pp. 10-33). However, it is not only because knowledge is perspectival that it cannot be quantified. Basically, the question “what is knowledge?” has no unequivocal answer, and what cannot be defined clearly cannot be measured. Take my own characterization of knowledge. Assuming that it is correct, it still refers only to intellectual knowledge or “knowledge that”, as Ryle called it, a type of knowledge that has to be distinguished from practical knowledge or “knowledge how” (see my blog dated June 9, 2008). Even supposing that we could measure knowledge-that, we would measure only a part of what we know. Maybe all our knowledge-that, our theoretical knowledge, could be captured in books, articles and computer files (which I doubt), but how should we capture and measure all the things that we practically know how to do but cannot put into words? How should we measure the knowledge of how to skate or how to drive a car, activities that can perhaps be theoretically explained but that we know how to do only when we are actually able to do them successfully? Moreover, for everybody the knowledge of how to do something is a bit different: my knowing how to skate is not exactly the same as your knowing how to skate (for instance because our physical capacities differ a bit). Or what about doing research? The main lines may be listed in handbooks, but many of the choices you have to make are simply a matter of your experience and intuition.
All this becomes even more complicated when we look at other possible distinctions within knowledge. For besides the distinction between knowledge-that and knowledge-how, other classifications can be made. Let me quote Abel just by way of illustration: We can “distinguish … between (a) everyday knowledge (knowing where the letterbox is), (b) theoretical knowledge (knowing that 2+2=4 or, within classical geometry, knowing that within a triangle the sum of the angles equals 180°), (c) action knowledge (knowing how to open a window), and (d) moral or orientational knowledge (knowing what ought to be done in a given situation). Across these [types] of knowledge … the following important distinctions and pairs of concepts have to be taken into account: (a) explicit and implicit (tacit) knowledge, (b) verbal and nonverbal knowledge, (c) propositional knowledge (that which can be articulated in a linguistic proposition) and nonpropositional knowledge (that which is not articulable within a that-clause), (d) knowledge relating to matters of fact and knowledge based on skills and abilities.” (Abel, id.: 13)
Should we measure all these different types of knowledge, add them up, subtract what we counted more than once and then say: this is the amount of knowledge in the world? But how could we count or estimate everyday knowledge or implicit knowledge, for instance? And how could we say – which is a precondition for the counting task – that there is, at least theoretically, a fixed quantity of everyday knowledge or implicit knowledge in the world at a certain moment, for instance at 18:56 on November 14, 2011? I think that nobody would endorse the view that we can. But then the idea that there is a total amount of knowledge is not realistic.
There is no fixed amount of knowledge, not even at a given moment. I asserted this in my last blog and explained it in previous blogs. But besides that, I think that the idea that there is such an amount reflects a wrong attitude towards knowledge. It’s a bit like saying: that’s what we already have. Although it is true that today we “know” a lot more than in the past and that there are good reasons to value this, I think that for a scientist it is the wrong attitude. This becomes clear when we consider what knowledge is: justified true belief, as a standard definition runs. But already this rough definition, which goes back to Plato and which has since been the starting point for any discussion of what knowledge is (albeit often in the background), raises a lot of questions, such as: When is a belief justified? What is true? What is a belief? For whom does this belief exist? And for each answer, many new questions can be raised. It is not without reason that Karl R. Popper came to the conclusion that “we can never rationally justify a theory … but we can, if we are lucky, rationally justify a preference for one theory out of a set of competing theories, for the time being” (Popper, Objective Knowledge, p. 82). The presently accepted theory is nothing more than the best approximation to the truth that we have. In other words, the right attitude towards knowledge is not “that’s what we have” but, as a Dutch electronics company once said, “Let’s make things better”. This can be achieved only by not taking the “facts”, the knowledge we have gathered, as a starting point, but by starting from the method of arriving at the facts: Popper’s method of conjectures and refutations, or rather any method that leads to a critical attitude towards the facts. It is the scientific scepticism of Descartes (and before him already Montaigne). That’s one pillar of knowledge. The other pillar, or at least another pillar, is, of course, man, the one who makes knowledge and for whom knowledge is made.
But man as such does not exist; only individual men and women do, and this is the other reason why there is no knowledge as such but only knowledge for someone (or at most for a community of kindred spirits), as Karl-Otto Apel explained. Therefore a characterization of knowledge as methodically justified interpreted belief fits better with what scientists actually do.
Lucca, Italy: a 2,000-year-old Roman theatre, now in use as an apartment building
I once discussed here Popper’s rejection of what he called the “commonsense theory of knowledge” or “bucket theory of mind” (see my blog dated April 5, 2010). According to this theory, as Popper describes it, there is a fixed quantity of knowledge that we can gather in some way. However, the theory is false for several reasons; in short, because knowledge is perspectival. Basically, knowledge is a special way of interpreting the world around us. Of course, it changes through the ages and we also acquire new knowledge. But this changing and renewing of knowledge must be compared with the restoration and reconstruction of an old building, not with constructing new buildings in its place. The building gets a new coat of paint every so many years; stone walls are restored and get new bricks; wooden beams are perhaps replaced by iron beams; new extensions are added and old parts are pulled down; the interior, too, may change a lot through the years; and after 500 years we have a completely different building with original and modern parts and another appearance – not three or four new buildings. Moreover, the building looks different depending on whether we stand in front of it or look at the rear, and on whether the sun is shining or it is dark. So it is with knowledge, too.
Therefore I was a bit surprised to read in a lecture by Prof. Robert Dijkgraaf, president of the Royal Dutch Academy of Sciences – summarized in the Dutch daily De Volkskrant of the 29th ult. – an estimate of how much knowledge presently exists, namely one zettabyte, or 10²¹ bytes (I pass over here that the lecture made no distinction between knowledge, information and data). It suggests that the existing knowledge is something fixed, albeit continuously growing. It is as if one could say: if we distributed all the knowledge there is among all the people in the world, each person would receive about 143,000,000,000 bytes of knowledge. The problem would then be how to coordinate them.
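The per-person figure can be checked with a quick back-of-the-envelope calculation; a world population of 7 billion, roughly the 2011 figure, is assumed here:

```python
# Back-of-the-envelope check of the per-person share of one zettabyte.
ZETTABYTE = 10**21               # bytes
WORLD_POPULATION = 7 * 10**9     # rough 2011 estimate (assumption)

bytes_per_person = ZETTABYTE // WORLD_POPULATION
print(bytes_per_person)          # 142857142857, i.e. about 143,000,000,000 bytes
print(bytes_per_person / 10**9)  # about 143 decimal gigabytes per person
```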
Actually, the lecture was not about how much knowledge there is but about how to find one’s way through it, which is indeed a problem, whether one sees knowledge as perspectival or as quantitatively measurable. But I guess that precisely this distinction makes a big difference to the solution of the wayfinding problem. From the point of view of the bucket theory, ideally one would try to learn as much knowledge as there is; but alas, this is impossible, so Dijkgraaf proposes a strategic approach: learn those pieces of knowledge (at school, at the university) that have a strategic position in the sense that they are central, having relevant connections with the parts of knowledge that we do not learn but that are also important to know for some reason. The learned pieces of knowledge must give as good an entrance as possible to the unlearned pieces. I think it is a conservative approach. It takes the old idea of pumping knowledge into one’s head as a starting point for science and maybe also for being an intellectual. In the background (not so much in Dijkgraaf’s lecture but generally in such approaches) there is often the fear that people will lose the old values of learning and, in the end, certain valued capacities of the brain: the capacity to store facts. The latter is not impossible, for learning values simply do change and brains adapt themselves genetically to new circumstances.
Now, I do not want to deny that knowing facts can be useful and can also enrich one’s life. But is learning strategic facts really the solution to the problem of how to find one’s way in the field of knowledge? I think that from the perspectival view of knowledge a methodological approach is more obvious: learning methods for finding knowledge rather than learning strategic points from which to start. If knowledge is perspectival, and if the perspectives are continuously changing in addition, then it is more important to know the right questions to ask in order to find your way than to know the right places to start. For the appearance of these places is continuously changing. In other words: learn how to find your way, not from where to find your way.
The strategic-facts approach and the methodological approach do not completely exclude each other, of course. For one thing, the methodological approach is not possible without a basic knowledge of facts. For another, once one knows the strategic facts, one must know how to find one’s way to the unlearned facts. However, the differences between the two approaches are fundamental: they are based on different views of what knowledge is.
The methodological approach requires a different mental attitude and, in view of what is known about the development of the brain, it is not unlikely that it will lead to a genetic change of the brain if it becomes the leading approach to the world of knowledge. But should that be regretted? I don’t think so. Genetic changes of the brain are normal. They have taken place as long as man has existed and they have led to a better adaptation to the world around us. This does not imply that it will also be so in the future, any more than it implies that our present genetic constitution is the best one for a changing world. Maybe it is, maybe it isn’t, but we simply cannot deny what happens to be.
The Essays of Montaigne, but also the report of his journey through Europe, keep holding your interest, however often you read them. And not only is it interesting to read what Montaigne wrote himself, it is also interesting to read what others have written about him and his books. Therefore, not only can you always find Montaigne’s Essays on the table in my study, ready to be opened (instead of hidden somewhere among the books in my bookcases), but now and then I also read one of those commentaries on Montaigne that I happen to come across, and I do not find it annoying when someone tells me something about Montaigne that I have already read in one of the other commentaries. So, no wonder that I have a little library of Montaigne commentaries. The latest one I added was Saul Frampton’s When I Am Playing with My Cat, How Do I Know She Is Not Playing with Me?, and this quotation – for the title is a quotation from the Essays – already says a lot about the man Montaigne was: a man who looked at the daily things of the world around him and asked surprising questions about them. Moreover, he was a man with an eye for cultural differences, which becomes especially clear from his travel journal (which was written for personal use, however, not for publication). My remarks about Montaigne are not original, I know, but it is always nice to discover things anew, even when “everybody” knows them, and to be pointed to facts that many other people already know but you just don’t. It is a way of developing your mind. In order to show the richness of Montaigne’s thoughts, I give here a few quotations from the Essays. I make no pretence that they are the most important ones or a kind of summary of the work. The Essays simply cannot be summarized. I took just a few passages that I had underlined in the book, and I did not underline many other passages that are far more worth stressing. Just read them, enjoy them and think about them.
- Of course, you know this one already, but in case you don’t: When seated upon the most elevated throne in the world, we are but seated upon our breech.
- Many faults escape our eye, but the infirmity of judgment consists in not being able to discern them, when by another laid open to us.
- Things most unknowne are fittest to be deified.
- He that should fardle-up a bundle or huddle of the fooleries of mans wisdome, might recount wonders.
- Why, in giving your estimate of a man, do you prize him wrapped and muffled up in clothes? He then discovers nothing to you but such parts as are not in the least his own, and conceals those by which alone one may rightly judge of his value.
- Miracles appear to be so, according to our ignorance of nature, and not according to the essence of nature.
- In truth, custom is a violent and treacherous schoolmistress.
- In truth, it is not want, but rather abundance, that creates avarice.
- Of all the follies of the world, that which is most universally received is the solicitude of reputation and glory.
- The conduct of our lives is the true mirror of our doctrine.
Well, of course, it’s my own fault, for I am a simple philosopher sitting in his room in his private ivory tower. But now and then I come down, as my dear readers may have noticed, in order to look for material for my blogs, either live by travelling, or virtually by surfing the Internet or reading printed matter. And so I discovered a few days ago that technological developments have already progressed further than I had thought. In my blog last week I saw mind reading as something far away in the future, but what did I discover? It’s already among us. No, I do not mean the mirror neurons in our head that read, for instance, the feelings, emotions, intentions and so on of other persons, as my readers may remember. I mean an artificial device that really can read what is going on in our brains. The gadget I found on the Internet is a kind of headset. You can connect it to a television, which then interprets your brain waves. Or you can use your brain waves to control computer games or other software programs. The Australian engineer Adam Wilson made such an application known to the world by using it to send a Twitter message. And people suffering from brain disabilities and paralysis will be able to use it to control their wheelchairs. I have not yet found it in the stories and messages on the Internet (but that may be my fault), but what about using such a headset as a kind of mobile telephone? Just put a mind-reading set on your head, let your partner do the same, and you can communicate just by thinking! You no longer need to say a word. Just think. One problem to be solved, of course, is that there will probably also be other thoughts in your head, so the mind-reading set must learn to select the right ones. But it will make communication much easier. You’ll even be able to tell your partner what you cannot put into words.
Communication will then be more direct. But be careful, for maybe your partner will also hear what you want to keep secret. One step further will be to read your thoughts even when you do not have a mind-reading set on your head. You enter a building and a mind scanner in the doorpost reads whether you intend an attack or whether you are there for a decent reason. Even better, place such scanners on every street corner, as is done now with surveillance cameras. The uses will be infinite (as will the misuses). From a philosophical point of view all these developments are very interesting, and they will certainly give concepts like freedom and personal identity a new meaning.
Big Brother is watching us. I have discussed this theme several times already. Facebook wants to have our data for commercial reasons. State authorities want to have our data and follow what we do because we might possibly be criminals. We find cameras everywhere monitoring our behaviour. But at least our thoughts are free, as a famous German song composed about 200 years ago says. The idea is much older, though. It had already been expressed by Cicero in Antiquity and then later, for instance, by the German mediaeval poet Walther von der Vogelweide. We still think so today, but for how much longer?
The idea that our thoughts are free is meant to express that we can think what we like because our thoughts are hidden from others. Even prison cannot limit them. And although our thoughts tend to be directed towards what is socially and culturally acceptable and shaped by what we have learned, everybody who wants to develop his or her own thoughts is free to do so.
It is to be hoped that thoughts will remain free in the sense that we can think what we want to think, but there are signs that the time is near when they will no longer be hidden. I once wrote a blog about a study showing that a brain scanner can reveal our intentions better than we can ourselves. Now our brains can also be scanned in order to see what we have done. At least the first steps have been taken. Researchers presented film fragments to three test subjects, and while these persons were watching them their brains were scanned. Then the researchers began to search video clips on YouTube, and with the help of the scanner data, a special computer program and some other computer work they succeeded in reconstructing the film fragments the test subjects had seen (see http://gallantlab.org/). Of course, these reconstructions were possible because the researchers already knew what film fragments they were looking for, but the next step will certainly be that they can also reconstruct what we have seen without such reference material. The whole procedure is still very complicated and time-consuming, but I guess the time will come when it will be a matter of seconds, and with much higher quality results. One step more and it will be possible to “read” not only what we are doing at the moment our brain is scanned but also to retrieve what we did in the past, that is, to read our memories. Then the uses will be legion. It will be easier to solve crimes but also to repress unwanted behaviour (just have every citizen’s brain scanned once a month).
All this sounds like science fiction, but wasn’t the myth of Icarus flying through the air also a kind of science fiction in Antiquity? And now we can fly, albeit in a different way than Icarus did. Fiction often becomes fact, and it is to be expected that this will also happen with brain reading. Thoughts are not mere representations of what we do or have done, but they’ll certainly be influenced by the idea that the present representations in our brain and our memories can easily be read. And once these can be read, it is not unlikely that our thoughts can be read as well. Then they’ll no longer be free.
I think that one reason why it is often thought that we do not have a free will is that it has turned out that most of the processes in our brain are unconscious. And then the conclusion is easily drawn that what happens unconsciously happens without our will. As I have explained in other blogs (see for instance my blog dated September 13, 2010), this conclusion does not follow; one simply needs more evidence for it. This does not mean, of course, that all things we do occur with our will. What I do maintain, however, is that fundamentally we have a free will and that within the limits of our body and the situation we have choices. We can plan actions long before they take place, and even at the last moment we can often still choose what to do. But in fact, most of these freely chosen actions are worked out unconsciously. How else could it be, in view of the limited capacity for conscious processes in our mind?
It is an interesting question, then, how the unconscious part within us works. In their article "The Unbearable Automaticity of Being" the psychologists John A. Bargh and Tanya L. Chartrand shed some light on it. In this blog I can only touch on their analysis, but in short they see three processes at work that determine our unconscious reactions, or forms of automatic self-regulation, as they call them. The first one is an "automatic effect of perception on action": we see other people doing things and when it fits the ideas that were stored before in our head – if not our prejudices – we are going to act in the same way. Although Bargh and Chartrand do not mention it, it makes me think of what the recently discovered mirror neurons make us do (see my blogs of June 27, 2011 and later). In other words: we automatically act in a certain way because we see others doing it that way.
The second automatism is "automatic goal pursuit": for one reason or another we have developed certain goals in our mind, and they are automatically activated when we happen to meet the right circumstances in which we can pursue them. However, in order to acquire these automatisms we often first need a conscious learning process that teaches us the right behaviour. Once we have internalized the learned behaviour it becomes automatic, like driving a car, for instance. We can call these automatisms skill, experience, practice or routine. They can also be acquired by unconscious processes that are different from explicit learning. Once in a situation where we need to apply our skill, we automatically behave in the right way.
Bargh and Chartrand describe the third automatism as "continual automatic evaluation of one's experience". Evaluating whether an object or event is good or bad is often seen as a conscious process, but in many cases it does not happen so freely, as the authors point out. Our evaluations are often (if not usually) activated directly, without our needing to think about it and even without our being aware that we have classified a person or event as good or bad. They just happen. And when they happen, they can influence our mood and even our emotions, or they can influence our behaviour, for instance by making us avoid places that arouse unpleasant feelings.
Actually, all these processes are not so different from what we freely and consciously do, for we can see them as stored free will; or at least a big part of our reactions can be seen that way, namely to the extent that they are the result of learning and of handling the experiences of life.
In the Netherlands (and not only there) a lively debate is going on about the question whether man does or doesn't have a free will. On the one hand there are those like the brain researchers Dick Swaab and Victor Lamme who deny that we have a free will; on the other hand there are philosophers like Daan Evers, Niels van Miltenburg and others who reply that present research does not substantiate that view. I have discussed the views of Swaab and Lamme before in my blogs and rejected them with about the same arguments as those used by Evers and Van Miltenburg. However, whichever side may be right, both views lead to intriguing questions. Suppose that there is no free will; does this mean then that our will is determined in the sense that if one knows the present state of a person's body and of the world around us, one can deduce what this person will want in ten years? If such a "deterministic" determination does not exist (as some philosophers defend), what then determines our choices and our criteria for choosing? And "who" applies them? (Of course, we can ask the same questions for the birds and other animals in my garden.)
But if the will is free, how far does this freedom go? It certainly is not without limits, for we are bound by our bodily constitution and the world around us. Freedom of the will can only be a freedom within borders, or, more positively formulated, it is the freedom to choose from a certain number of possibilities according to a certain number of criteria.
The most intriguing question is, of course, whether all this makes sense. I mean: either there is a free will and the view that there isn't cannot change that; or there isn't a free will and this determines that some people think there is although there isn't. For would any philosophical idea have an influence on "real life"? Aren't philosophical ideas just epiphenomena, like the other products of our mind, as many neuroscientists and philosophers (Churchland, for instance) tend to think? Maybe they are, but even so, there are indications that the functioning of the mind is a bit more complicated than just this and that the mind has an important function in steering what we do.
It is not a proof, but that it can be so is suggested by a study by Roy F. Baumeister and others, which indicates that certain beliefs can be advantageous for you. For this is what they found: "possessing a belief in free will predicted better career attitudes and actual job performance. The effect of free will beliefs on job performance indicators were over and above well-established predictors such as conscientiousness, locus of control, and Protestant work ethic." (quoted from the abstract on http://spp.sagepub.com/content/1/1/43.abstract). In other words, actually it is not so important whether the will is free or not. Even if it is not free, you had better believe it is, for it is good for your career (and who knows, maybe for other important facets of life as well). And it is better not to consider the question of the free will as merely an interesting academic question, for if you deny that there is a free will, you are less well off than your colleagues who think that there is. So you, readers of my blog, be warned: now that I know this, I no longer defend the idea that the will is free because I believe in it but because it is better for me (supposing that I am free to choose this position, of course). And the already excellent careers of Swaab and Lamme would have been even more excellent if they had used this information.
Evil is in the eye of the beholder, and if we do not see it we create it in our mind in order to justify why we took action. These are two lessons that one can learn from Roy F. Baumeister's book Evil: Inside Human Violence and Cruelty. Of course, the second part of this thesis is not true under all circumstances, and Baumeister does not say that it is. Moreover, there are some kinds of behaviour that can objectively be qualified as evil, even when the perpetrator may have a different view: intentionally murdering innocent people like passersby; the Holocaust... But here I do not want to discuss that. What I do want to discuss is that it often happens that what is evil is constructed in the mind. This can be seen in war, for instance. Wars are fought for many reasons, but it is often difficult to make these reasons clear to the people who have to fight them, whether your reasons are good or whether they aren't (and whether your reasons are really good is often a point of discussion). But you need the support of your people, for you need soldiers. Then there is a simple solution: depict the enemy as evil. If your enemy is seen as pure evil, you do not need further justification for your war. Success is guaranteed. Undecided loyalties are won over to your advantage.
“Perhaps the most famous example of this in the twentieth century”, as Baumeister calls it, was the British propaganda for recruiting soldiers in the First World War. And the same approach appeared to be effective in Australia and the USA, for instance. The Germans were depicted as cruel Huns, and the allied forces got their troops. That most atrocities ascribed to the “Huns” were extreme exaggerations or simply false seems then an irrelevant footnote for post-war historians, as long as those who became soldiers believed in them. In view of this, it is striking that this image of the Germans as devils almost disappeared as soon as the soldiers were there, at the front. This is at least the impression one gets when reading autobiographical novels and soldiers’ diaries about the First World War. The Germans are often called “Huns”, it is true, but the picture one gets from the novels and diaries (and I have read dozens of them) is not that the French, British, Americans and so on shoot at the Germans because they are evil but because they are the enemy and because, once you are there, you have to defend yourself and kill those on the other side in order to survive. Only German machine-gun operators and snipers are seen as evil, because they kill so many people and because of the way they do it (forgetting that there are also machine-gun operators and snipers on the allied side). Enemy soldiers that flee are not shot down because they are evil but because they can become a future danger, because they are the enemy, and because it is your task to do so (again a case that shows that the situation influences what you do to a large extent). Snow, the soldier in my last blog who killed his first opponent, did it only because he was there to do that, and he felt remorse. Reflecting on the incident, Lynch, who describes the situation and who stood next to Snow, even calls him the murderer and the German soldier the victim.
Later in the book (and not only in this book) German soldiers killed on the battlefield are often called “poor guys”. “Is that civilization?”, one of Lynch’s comrades sighs when seeing all the victims on both sides. Would such a remark be expected if the enemy was really seen as evil?
All this shows, I think, that evil in a complex situation has different levels of construction. What is considered or presented as evil on one level (in my example: the level of the government) may be reversed on another level (here: the level of the soldiers). Evildoer and victim change positions, so it sometimes seems. “C’est la guerre” (That’s war), Lynch concludes. But is that a sufficient justification?
Australian Memorial, Pozières, Somme region, commemorating the heavy
battle fought by the Australian soldiers when conquering the hill here.
I have just finished reading Somme Mud by E.P.F. Lynch. It is an autobiographical novel about the author’s experiences as a soldier on the Western Front of the First World War (so in France and Belgium), although Lynch denied that the novel was about himself. Lynch, an Australian, served voluntarily as a member of the Anzac, the Australian and New Zealand Army Corps. His penetrating descriptions of the fights and the battlefields can be compared with those by Ernst Jünger in his Storm of Steel. You feel yourself in Lynch’s skin, to the extent that such a thing is possible, of course, for you miss the stench and the noise, for instance.
We follow Lynch from his departure from Australia to his first battle on the Somme and then to the other battles he participated in – including the heaviest ones, like the battles on the Messines Ridge and near Passendale – till the end of the war, when the front began to move, the armistice and Lynch’s return to Australia. Lynch was wounded five times but, surprisingly in view of the many very dangerous situations he got through, he survived, as did his little inner group of comrades, with the exception of one.
I could write a lot about the book, and I could also compare it with other novels of the First World War and with other soldiers’ autobiographies, but here I want to bring forward one thing that is related to what I have already written about in these blogs. The closer I came to the end of the book, the more it made me think of what Arendt wrote on the banality of evil and of what Zimbardo wrote on the situatedness of what we do: that it is the situation that makes you a devil or a hero. You can see this in this book too, although the situated behaviour did not develop as quickly as it did in Zimbardo’s prison experiment (Zimbardo had to break off his experiment after only six days; see my blog dated March 14, 2011). Here it was a matter of months and years. I read the book in a Dutch translation, so I cannot give literal quotations, but somewhere near the end Lynch makes a comparison between civil life and army life: a man does not enter a pub, so he says, in order to get drunk, but once he is there, he does the same as the others do and in the end he gets smashed. In the army it is no different. A man does not go into action with the intention to kill his fellow man, but with a grenade or a bayonet in his hands he will do exactly the same as his comrades do, and he will use them fully. Is there a better description of the fact that the situation makes us do what we do? Of Zimbardo’s conclusion that it is not psychological dispositions that make people behave in an evil (or heroic) way but the situation that brings people that far? Of the banality of evil in Arendt’s sense? (Arendt stressed the wrong side of what we do, but as Zimbardo made clear, and as can be seen in Lynch’s book too, the same is true for heroism.)
To take another example: in the beginning of the book Snow, one of Lynch’s comrades of the inner group, sees a German soldier walking to his line with a pack on his back. It is the first enemy they see. Snow shoots him down but gets pangs of conscience. Later in the book, and especially at the end, all feelings of remorse for killing have gone. Every German who has not surrendered is killed, if possible, including Germans who have left their positions and flee, or who want to surrender but haven’t done it quickly enough, and so on. It is no problem to shoot them down. Seeing the killings and the heavily mutilated bodies on the battlefield, one of Lynch’s comrades says: “Is that civilization?”
Yet most soldiers were conscripts or volunteers – some very young, some older, some already relatively old. They were ordinary civilians before they went to this war; people who before the war probably would never have thought that they would be able to participate in such mass killings or, on the other hand, to run forward into the bullets of the enemy’s machine guns, just because they “had to”. Apparently the situation is stronger than your will, at least often, and Lynch’s book is a good illustration of it.
The mainstream of the philosophers who discuss the question what makes up our personal identity defend the so-called “psychological view”, which states that our identity is in our memory and our psychological characteristics: a person’s identity remains the same as long as s/he can still remember past facts of his or her life or as long as s/he has remained unchanged in other psychological respects between some point of time in the past and the present. In short, what makes us who we are, our identity, lies fundamentally in our past. I have talked about this before in my blogs and I have also criticized this view, putting forward that our bodily characteristics are just as important as our psychological characteristics (why else do many people give so much attention to their body and their physical appearance?). Others point to the relevance of factors like the group you belong to, your profession and work, and so on – factors that have a more sociological character and that refer rather to what makes you here and now than to what you were, as the factors mentioned in the psychological view do.
But what about our future? This may seem an odd question, and I do not want to say that what happens to you, say, ten years from today is important for your identity now. Nevertheless, it may be that the future has a role in making you. An indication of this is given in a study by the Russian neuropsychologist Alexander Luria, which is mentioned by Richard Sorabji in his Self (here I partly rely on Sorabji). Luria followed for some 25 years a soldier, Zasetsky, who had lost a big part of his memory through a shot in his brain during the Second World War. His amnesia affected both parts of his episodic memory and abilities like reading and writing. Since he got the injury, Zasetsky spent all his time trying to regain his life and what he had lost, to rediscover who he was, and to write his efforts down in a diary. This had become his life’s project.
Does this tell us something about our personal identity? Your memory is important for you, that’s clear, but probably more important for you is what you make of your future. Anyway, as Luria comments here, those of his patients who had lost their ability to plan their future disintegrated far more than those who had lost their memories. Apparently, at least for keeping your identity intact, making plans for the time to come is more important than preserving what you have lived through. An orientation to the future may be more relevant for our identity than being conscious of our past, as the psychological view states.
A discussion is going on in the philosophy of science on the question whether understanding phenomena is relevant for science and, if so, how it takes place. This is remarkable insofar as the mainstream of science has always propagated the view that it is the aim of science to explain phenomena and that there is no room for understanding, because understanding is considered subjective and in science there should be no place for subjectivity. Since Carl G. Hempel developed his famous deductive-nomological model of explanation, the mainstream view has maintained that for explaining in science only the relation between the phenomena, other phenomena and an explaining theory is relevant. This theory was supposed to be valid at any place and at any time. In this view there was no room for an investigating subject who did the research and for whom the relation investigated had to be true, not to speak of a wider research community and a wider public. This idea of science was supposed to be valid for all kinds of investigation that bore the label “scientific” in some way, from sociology and psychology to physics and biology. Of course, opposition to this view did exist, but from the side of the mainstream it was often disparagingly called “metaphysical”.
But times are changing, and so here, too. Influenced by proposals by the Dutch philosopher Henk de Regt and others, it is more and more accepted that the investigator (and with him or her the whole scientific community) is important in the explanation process. More exactly, they say that science is not only about the relation “x explains y” (whereby x is a theory, while y is what is to be explained), but about the relation “x explains y for (the knowing subject) z” (Karl-Otto Apel, Die Erklären-Verstehen Kontroverse in transzendental-pragmatischer Sicht, 1979, p. 267; there is also an English translation of this book). In this view there is not only room for understanding; it is an essential part of science. What I find annoying in the present discussion about the place of the knowing subject and the interpretative part of the scientific process is that there is hardly any reference to the “old” opponents of the two-dimensional view of the scientific process, so to Apel in the first place.
What does the “new” interpretative part of science involve? How must we imagine it? In a blog like this I can give only a few hints. In my dissertation about understanding human actions, I defended the view that interpretation is placing a phenomenon in a kind of mental scheme of the type developed by Schank and Abelson, which I have already mentioned several times in my blogs. Maybe this can be related to the idea of the significance of model construction for understanding, as proposed by De Regt. In my dissertation I also developed the view that in order to understand human actions we have to answer three questions, namely 1) what an action is; 2) what an action is for; and 3) why an action is performed. The first question asks for a description of an action, the second for its intention and purpose, and the third for the reasons behind the action: what made the agent perform this act. By extension (and of course adapted to the object of investigation) these questions apply to understanding in the social sciences in general. Do they also apply to science in general? Since they are about understanding actions in a narrow sense and not about the scientific process in general, maybe they do not apply exactly as they stand here. However, I think that they are a good starting point for thinking about what understanding involves when we say that an investigator not only explains a phenomenon but that the resulting explanation is also understood.
I am not so happy now, for I am using my computer. At least, that is what I have to conclude from a study by Matthew A. Killingsworth and Daniel T. Gilbert of Harvard University. Actually it is not using my home computer as such that makes me unhappy, but that it makes my mind wander. As the title of their study indicates, “A Wandering Mind Is an Unhappy Mind”: a mind that does not concentrate on the task it is supposed to do is not happy.
I’ll skip the methodological details, but the researchers asked 2,250 people what they were doing, what they were feeling at that moment and whether they were thinking about something else. The surprising result was that much of the time we do not think about what we do: our mind “wanders” and is full of other thoughts. No less than 30% and up to nearly half of the time we spend on an activity we are thinking about something else: about what we did yesterday, about the meal we have to prepare this evening, about our next holiday, and so on. And this turned out, too: mind wandering makes us unhappy. What we need for being happy is concentration. Do what you do, and nothing else. But as everybody knows, the mind tends to be easily distracted. And so we are unhappiest when we are using our home computer, when we are working and when we are resting. The latter two activities are a bit contradictory, for if both working and resting make us unhappy, what else can we do in order to feel well? Apparently there are activities that are seen as neither the one nor the other, and it is these that make us happy. Therefore, make love! There is no other activity that requires more concentration. But there are also good alternatives: exercising, or engaging in conversation. They make us happy, too.
Mind wandering also has a positive side: the more you daydream, the more creative you are. It is also important for evaluating how you behaved towards other people and how you’ll behave in the future. So it brings you benefits, but not without emotional costs.
The upshot is that in order to be happy, you have to concentrate on what you do, on the here and now – something that some religions tell you, too. But should I stop writing my blogs, because using my computer makes me unhappy? I have my doubts, for when I now let my mind wander over the blog that I have just written, I realize that writing it required much concentration. Maybe it is true on average that using your home computer makes you unhappy, but in my view it cannot be taken as a general rule. Anyway, I feel well writing these sentences. However, after an hour or so my blog will be finished and I’ll be longing for a rest. Then the study by Killingsworth and Gilbert tells me that this is not a good idea. Happily, there is something else I can do so that my state of happiness will continue: take my bike and go for a ride.
When interpreters analyse a text, they tend to see more in it than the author originally put into it: they often think they find a meaning behind the text that the writer was unaware of. This textual interpretation, which originally goes back to the exegesis of the Bible, later led – under the influence of Dilthey – to the thought that other forms of human life could be considered as texts as well. So the idea developed that actions, too, can be interpreted as texts. A recent form of this, which I have discussed in other blogs, is the view that an action can have different descriptions, which actually is the same as the idea that it can have different interpretations. For instance, opening the door of your house can be interpreted just as that, but sometimes it can also be described as warning the thief who is upstairs, because he hears the noise. Interpreting human life need not be limited to human activities as such, like actions, but can also be extended to the material products of what people do: the tools they make, the houses they build, their art, and much more. These human products often get a symbolic meaning that exceeds by far what their makers put into them when they made them.
A recent critic of the symbolic interpretation of material culture is Nicole Boivin, especially in her Material Cultures, Material Minds. Boivin does not deny that material cultural expressions can have symbolic meanings, but what, according to her, is problematic in this approach in cultural anthropology and archaeology is that the symbolic meanings are often so much seen as the only correct interpretation of these material expressions that plain interpretations are rejected, even when both types of interpretation could be equally true or the plain interpretation may be better. Boivin uses an example of her own, but take this one. In the Dutch villages of Staphorst and Rouveen, the window frames and doors traditionally have the colours green, white and blue; green symbolizing young life in nature, white symbolizing purity, and blue being used because it averts calamities. But need it really be so that a modern farmer painting his farm, or a city-dweller who has bought a farm there as a second house, gives these meanings to the colours? Of course not. When we ask the city-dweller, for instance, why she paints her farmhouse in these colours, there is a good chance that she knows nothing about their original meanings. Probably she’ll answer that she wants to maintain the traditional appearance of the village; that she wants the look of her house to agree with the neighbouring farms; or simply that she likes the colours. Or maybe the local regulations prescribe these colours. In other words, it is quite possible that there is no symbolic meaning behind the present material expression of the painting; that the original meaning of the colours has been lost; and that nowadays the colours are used for quite banal reasons.
This makes me think of the explanation of the Japanese tea ceremony that I once received during the “Japanese week” in the town where I live. The woman performing the tea ceremony told about its religious meaning and the special feelings it arouses in the participants. I did not doubt it, but I had a good pen friend in Japan who performed the tea ceremony now and then, so I asked her what she thought of it. Well, she said, I’ll not deny this explanation, and it is true that some tea ceremony performers get a special feeling by doing it, but for many Japanese it is simply a thing they sometimes do, and for me it is just a hobby…
Most people outside Japan do not realize that the Fukushima calamity is still a part of daily reality for many Japanese. One thing that actually hasn’t been solved so far is the threatening “meltdown” of the power plant, which can be stopped only, as a website explains, by “suppress[ing] the nuclear chain reaction by inserting control rods into the reactor core and … gradually cool[ing] the fuel rods with constantly circulating water.” However, the cooling system of the Fukushima nuclear plant has been destroyed, too. A temporary solution has been found by hosing the nuclear reaction system with water, but it is only a short-term solution. Moreover, it can lead to nuclear pollution of the environment. So a more permanent solution has to be found, like repairing the damaged cooling system or replacing it with a new one. But as the same website says: “Repair or installation of the cooling system will unavoidably be conducted in an environment highly contaminated with radioactive elements with serious risk of future health complications.” Since apparently this cannot be done by robots, the question then arises: who should do the job? According to Yastel Yamada, a business consultant who manages the website just mentioned, it is not expedient to take younger people for it: “Young people with a long future should not have to be placed in a position of having to undertake such a task. Radiation exposure of a generation which may reproduce the next generation should be avoided, regardless of the amount.” Besides, it is not they who are responsible for the construction of nuclear power plants; it is the older generation that is, and this generation has also benefited most from them. Therefore a “Skilled Veterans Corps” should be formed, consisting of “volunteers of veteran technicians and engineers who are much more qualified to carry out the work with much better on-site judgment.”
At first sight this argument sounds reasonable, but is it really? There may be nothing against using volunteers for doing the job, but what annoys me in the argument are the implicit (and partly explicit) suppositions that the older generation is anyhow guilty of the construction of the Fukushima power plant and that a younger life has more value than an older life. I think that there are good reasons to question both suppositions. Here I do not have the space for an extensive discussion of the arguments, so I want to limit myself to a few remarks. First, a dichotomy between a younger and an older generation does not exist. A generation is a sociological category, but in fact age differences lie on a continuum. Should we then introduce degrees of responsibility and degrees of eligibility for the Volunteer Corps? Second, the “younger generation” may not be responsible for the construction of the Fukushima nuclear power plant, but did it protest against it? And didn’t it profit from the plant as well? Third, “generation” is a sociological category and generations do not exist as such, as I just explained; only individuals exist. How then can we make the “older generation” as a whole responsible for the Fukushima power plant, regardless of the attitude of its individual members to the plant and regardless of the degree to which they have profited from it? Fourth, how to weigh a younger person of, say, 40 years old who becomes ill after 30 years because of exposure to radiation from repairing the Fukushima power plant against a person of 69 years old who becomes ill after one year, as may happen? Fifth, that those still to be born must not be exposed to radiation is a strong point, but many young people already have children and do not want more, or they can choose not to have children. Sixth, how to value a life? Hasn’t life a value as such? How to weigh a 70-year-old person (who might become 100 years old) against a younger person (who might die young)?
Since the length of a life can be indicated statistically (but only for a “generation”, not for individuals), is statistics then, for example, a good foundation for valuing the worth of a life? Or the kind of education a person has received? Or what else? What matters in determining the value of a person, and who does the valuing?
These are only a few comments that occurred to me when I read the proposal for a Skilled Veterans Corps to repair the cooling system of the Fukushima power plant, and certainly more can be added, both against it and in support of it.
When my wife and I arrived in Skjolden, I did not recognize the little town. Many years ago I had been there with friends, on a tour through Norway. At the time I knew about Wittgenstein, of course, although I was not yet very interested in philosophy in those days. However, I did not know that he had built a log cabin there, on the other side of the lake, and that he had lived and worked there now and then between 1913 and 1951.
We put up our tent at a camping site a few kilometres from Skjolden, almost under a waterfall. An information board described a path to the place where Wittgenstein’s cabin had stood. It was a walk of about 45 minutes, but the last part was a steep and dangerous climb over the rocks.
The next day I felt a bit sick. But okay, I was there to “visit” Wittgenstein. I took the bag with my cameras, something to drink as well, and off we went, my wife and I. First along a tractor path, then through a meadow. We entered a little wood and the path became rocky. It also became steeper and steeper, and heavy going. I stopped for a moment. My wife, who was some 20 metres ahead of me, said: “I’ll take a look whether it is still far to go”, and off she went.
When she came back ten minutes later, she had already been “there”. The path had become even steeper and also a bit slippery. “Not much to see”, my wife said. “Some stone foundations of the log cabin and an Austrian flag.” I am a bad climber and did not feel well, so I decided not to go on.
Back at our tent, my wife showed me the photos she had taken. Then we drove to Skjolden again. Now we knew exactly where the log cabin had stood. It was on a slope some 30 metres above the surface of the lake. Through my binoculars the foundations and the flag were clearly visible. I wondered how Wittgenstein had made the dangerous climb a few times a week to collect his mail, in summer and in winter. And how he had got the building material there. Elsewhere in Skjolden we saw the house where Wittgenstein had lived during his first stay.
In my last blog I suggested that the existence of mirror neurons may be the solution to the sociological problem of how individual behaviour is tuned to group behaviour; in other words, why man behaves as a group animal. Psychologically, mirror neurons make it possible for us to feel empathy, to learn behaviour and actions easily, and much more. I think that mirror neurons may also offer a solution to an old problem in philosophy: the problem of other minds. Or at least they may put the problem in a different light.
The essence of the other minds problem is the question: how can we know that others have minds? So, how can we know that they are not zombies (zombies in the philosophical sense, i.e. mere automata)? Formulated this way, the other minds problem is an epistemological problem, a problem about knowing. Thomas Nagel replaces it with the conceptual problem of “how I can understand the attribution of mental states to others” (The View from Nowhere, p. 19; italics Nagel), which brings us a step nearer to the mirror neurons. However, if we see the solution of the other minds problem in these neurons, the solution lies in the way man is constructed, and so it is ontological.
Of course, one can always remain sceptical and say that the problem cannot be solved, but I think that the essential flaw so far is the intrinsically individualistic approach to the problem. Nagel says: “… to understand that there are other people in the world as well, one must be able to conceive of experiences of which one is not the subject: experiences that are not present to oneself. To do this it is necessary to have a general conception of subjects of experience and to place oneself under it as an instance. It cannot be done by extending the idea of what is immediately felt into other people’s bodies …” And a few sentences further: “The problem is that other people seem to be part of the external world…” (p. 20; italics mine). What is wrong with this is that the conception of man in this quotation implies that we have one individual and another one (and another one and another one and another one…), that these individuals are external to each other, and that it is actually not possible to bridge the gap.
There is no space here to follow Nagel’s approach (which consists in taking different perspectives or views), but that the portrayal of man presented here is wrong becomes clear once one knows about mirror neurons. Mirror neurons do just what cannot be done according to Nagel: extending the idea of what is immediately felt into other people’s bodies. For this is what mirror neurons do: they reflect the inner world of others in yourself. True, this cannot happen in a 100% reliable way, but it is what they fundamentally do, and it is also this that fundamentally makes man the group animal s/he is. Individual man grows up by imitating and simulating what other people do and by internalizing and creatively adapting their forms of behaviour. In a certain sense man mirrors other men. This is only possible if the other men have the same kind of mind as the mirroring man has. For if this weren’t so, the latter couldn’t become the mind-possessing group being that s/he is, for what would be mirrored in his or her inner self would then not be the other men’s minds but the zombies that they are, and the mirroring man would become a zombie, too. So if you have a mind, other people have minds, too.
At first sight mirror neurons look relevant only for sciences that study the individual, like psychology, where they can help explain and understand many phenomena. But what about the social sciences, for example sociology? The social sciences have collective phenomena as their objects: they explain why many people together behave or act in a certain way. This can be group behaviour, for example when a sociologist studies organisations; it can be aggregated individual behaviour, for example when a sociologist studies voting patterns related to the sociological background characteristics of the voters; or it can be a mixture of both, for example when a sociologist studies social movements. And there are many other themes, too, in which collective behaviour plays a part in some way (peace research, for instance). But how could such an individual phenomenon as mirror neurons be useful here? Isn’t it a well-known fallacy to see collective phenomena, social phenomena, just as individual phenomena packed together?
If I pleaded for a reduction of collective phenomena to a mere piling up of individual actions and pieces of behaviour without interactions, I would indeed commit that fallacy. Moreover, I do not want to say that mirror neurons are relevant for every theme in the social sciences. Nevertheless, there is a place for them, I think. There are several sociological approaches, but wasn’t it Max Weber who, in a famous definition of sociology, founded social action on individual actions? For he defined sociology as: “the science whose object is to interpret the meaning of social action and thereby give a causal explanation of the way in which the action proceeds and the effects which it produces. By ‘action’ in this definition is meant the human behaviour when and to the extent that the agent or agents see it as subjectively meaningful ... the meaning to which we refer may be either (a) the meaning actually intended either by an individual agent on a particular historical occasion or by a number of agents on an approximate average in a given set of cases, or (b) the meaning attributed to the agent or agents, as types, in a pure type constructed in the abstract.” (Economy and Society, § 1; italics mine; translation Wikipedia). Seen this way, I think there is room for mirror neurons in understanding and explaining social phenomena. Mirror neurons can help us make clear for what reasons and from what causes people react to other people around them, and why they react in a certain way. They help us understand and explain why people do not ignore other people but pay attention to them and react to them. As I see it, mirror neurons are the “missing link” between individual and group, between individual and society. It is a bit as if mirror neurons glue together individuals who watch each other.
One of the recent discoveries in neuroscience is the existence of mirror neurons. Mirror neurons are neurons in your brain that fire when you perform an action. But they also fire when you see someone else performing an action, and by doing so these neurons reflect or “mirror” the other person’s action in your own mind. Therefore it is as if you place yourself in the other person’s position and as if you are performing his or her action yourself. The importance of mirror neurons is still a matter of much speculation, for the research is actually still in its early stages, but some neuroscientists consider it one of the most important discoveries in neuroscience, and I think they are right.
Anyway, if it is really true that by means of mirror neurons it is possible to place yourself mentally in the position of the other, mirror neurons may be important for explaining some significant human phenomena, especially those that require imitative behaviour or imitative imagination. We can think of understanding someone’s intentions (s/he does what I would do in the same situation), empathy (feeling what s/he feels), language learning, gender differences, understanding why people imitate other people or follow them, and much more. Autism may be explained or partially explained by a defect in the mirror neurons.
I had heard about mirror neurons several times and found them intriguing. So I read a book about them (Mirroring People by Marco Iacoboni) and much of the functioning of human behaviour and feelings became clearer to me: how and why we react to other people (in some circumstances) and the like. Two weeks ago I went to a performance of Puccini’s opera La Bohème, an opera that I saw live for the first time. The production and the singers were very good, and the death scene at the end was so well performed and so emotional that I became emotional, too. Suddenly I thought: my mirror neurons are firing! When I have such a thought, one that transcends my feeling or thinking, usually the feeling or thinking stops immediately and I distance myself emotionally, more or less, from what I see or am participating in (which does not need to imply, however, that the event or scene I see or participate in becomes less meaningful to me or gets less value for me). But nothing like that happened now. Apparently my mirror neurons continued firing at full blast. I couldn’t stop the feeling in any way, as if I were nothing but a robot.
In his interesting The Illusion of Conscious Will, Daniel M. Wegner defends the thesis that the conscious will is a kind of emotion, namely the emotion that we are the owners of our actions. “Conscious will is the somatic marker of personal authorship, an emotion that authenticates the action’s owner as the self” (p. 327). In short, the conscious will is an authenticity emotion. According to Wegner, this makes it the basis of our idea of responsibility. “Moral judgements are based not just on what people do but on what they consciously will” (p. 335). And we feel responsible for what we consciously do, even though in fact this feeling may have nothing to do with the reasons and causes why we actually act; namely, that there is a robot in us, as Wegner calls it, or a zombie, as I have called it in other blogs, that takes the decisions and determines what the right actions are in the given circumstances. Then the feeling of authorship that the conscious will is has nothing to do with the steering of our actions, although the will thinks that it controls them. However, “illusionary or not, conscious will is the person’s guide to his or her own responsibility for action. If you think you willed an act, your ownership of the act is established in your mind. ... We come to think that we are good or bad on the basis of our authorship emotion” (p. 341).
Although Wegner’s theory sounds plausible, I want to make two critical remarks. The first is that Wegner’s theory of the conscious will (like many other theories of the will) treats the will as a short-term phenomenon: the will to do an act now. However, the conscious will is more than that. It also involves planning: willing something later, in the future. I want to have good seats for an opera next February, and therefore I have to buy my tickets next week when the advance sale starts. This is a different kind of conscious will from the feeling that it was me who wanted to write a note about it in my diary just a few seconds ago.
The other remark is this. According to Wegner, moral judgements are based on what we consciously do (see above). As Wegner had said just before: “a person is morally responsible only for actions that are consciously willed” (p. 334; italics mine). This is the foundation of Wegner’s theory of morality and responsibility. Happily for Wegner, it has no consequences for his theory that the conscious will is an authenticity emotion, for what he says here is simply not true. If we were responsible only for our conscious actions, many trials and lawsuits could be skipped. However, we are often also responsible for what we did not consciously do and did not want to happen. Negligence, undesired consequences of our actions, not doing what we were supposed to do ... All these things are often considered happenings that we are held responsible for (and that we are responsible for) but that we did not consciously do. I have talked about this before. One instance: you cause an accident with your car because you failed to give way to a car coming from the right. Did you consciously drive on without wanting to give way to the right? Of course not. Are you responsible for the accident? Of course you are. And I guess that you feel so, too.
The upshot is that the feeling of authorship of what I do is not limited to what I consciously do. This means that I can be responsible for what I did not consciously want to do. But the feeling of authorship is also not limited to what I do now or what I did just a moment before. It also ranges over what I did after deliberate planning and preparation. We simply need a wider perspective on what consciously willing is, certainly if we want to relate it to responsibility.