
Thursday 7 January 2016

Ready or not.

In the 1970s and 80s, Benjamin Libet, an American brain scientist, conducted a series of famous experiments. Instructed to carry out small, simple motor activities, such as pressing a button or flexing a finger, participants were placed in front of a clock with electrodes affixed to their scalps. During the experiment, the participants were asked to note the position of the clock's hand at the moment they first became aware of the urge to act. The experiments revealed an increase in electrical brain activity preceding the conscious decision to move by several hundred milliseconds. This subliminal brain activity was dubbed the «readiness potential». While Libet himself wavered somewhat in his interpretation of these findings, others unhesitatingly think this discovery has huge ramifications for our self-understanding. If volitional acts are initiated before we become aware of them, then we must be deluded when thinking our conscious «decisions» have any causal effect on what we do!

It may seem impossible to conclude otherwise. If electrical goings-on in our brains make our decisions, then in a sense we don't. But here one must not forget what a conclusion is in this context. The conclusion that we are not free is not a scientific discovery, but rather an interpretation of (or an implication drawn from) certain scientific discoveries. Interpretation is a way of framing data, involving, at least tacitly, arguments based on some underlying assumptions. That human beings are mere neurological robots, therefore, is not an empirical fact, but rather an implication drawn from one possible interpretation of these facts. Am I then suggesting that this interpretation must be wrong? No. I only suggest that it is not obviously correct either. In other words, I am asking us to consider the possibility of reading Libet's data differently. How to read scientific discoveries cannot, of course, itself be a scientific question--at least not merely a scientific one. This calls for philosophising.

Let us start by investigating why this gloomy conclusion may at first seem unavoidable.

Instinctively one might be alarmed to learn of the readiness potential. How unsettling that our brains decide for us! But as Wittgenstein once remarked, we often are struck by the wrong aspects of things: the important features escape our attention because of their familiarity (PI §129). Perhaps, then, the conclusion that subliminal brain activity diminishes the role of consciousness in human action seems so compelling only because we overlook something which is always before our eyes?

For example: There must be electrical goings-on in the brain all the time. Acknowledging this obvious fact should make some of the initial surprise ebb away. Electrical brain activity prior to conscious decisions is exactly what one should expect to find! After all, no brain activity is synonymous with brain death. OK. But Libet did not merely record electrical humming in the brain—that, clearly, would be unstartling. What his experiments revealed was a significant increase in neural firing some hundreds of milliseconds prior to a conscious urge to act. Surely this warrants paying special attention to it, or am I denying that something important happens here? Well, not exactly. But if we take a closer look at one of Libet's graphs, will we not see the line going up and down all the time? And if that is the case, why single out this particular peak as particularly significant—rather than, say, regarding it as just another elevated stage in a normal pattern of fluctuations? Doesn't this peak seem rather randomly chosen? (Subsequent experiments have, in line with this logic, identified readiness potentials several seconds prior to any conscious urge to act.)

My intention with those rhetorical questions is not to deny that a case can be made for the «compelling» conclusion, merely to emphasize that the case must indeed be made. One cannot simply take Libet's assumptions for granted. Once his reading of the graph is seen for what it is, namely a reading, and the assumptions built into that reading are made explicit, then, my point is, we are positioned to see that other readings must be possible too.

But let us, for argument's sake, grant Libet his reading of the graph's final peak. Let us assume, then, that the «readiness potential» is an established fact. Would this render conscious decisions causally ineffective in volitional human action?

Before stampeding towards that conclusion, we should notice how anemic the concepts of action and decision are with which Libet operates. A severely limited understanding of these human capacities is built into his research design, making it unclear how much of human action is actually illuminated by Libet's research. Purposeless finger flexing and random button pushing are (at best) special instances of human action--if indeed «bodily movements» is not a better term for them than «human action». However we categorize it, the behaviour of Libet's subjects is quite unlike much of what we otherwise think of as human action. Furthermore, Libet's experiments were purposely designed so as to make the timing of the movement irrelevant. The participants were to have no reason for preferring sooner over later. Some human actions may very well be like that—at least they were in Libet's artificial settings—but more often than not we care about what to do and when to do it. To make a long story short: It will take a considerable amount of philosophical work on Libet's part to make this convincing as a paradigmatic picture of human behaviour. (Which is needed for the implicit general rejection of conscious decisions to be plausible.)

And what does it mean to say that an action is «initiated»? Being clear on that point is of course crucial when reflecting on the implications of Libet's findings. The readiness potential is conceived as a brain activity initiating human action at a particular moment in time. But does it make sense (and if so, what sense does it make) to think of all volitional actions as subliminally initiated some milliseconds before they occur?

Sometimes we say that it took us time to make up our minds on what to do, suggesting a picture of a long run-up before a sudden take-off. A typical case would be the child who hesitates for a long time before suddenly ripping off a band-aid. Now, it seems plausible that certain happenings in the motor center of the brain—i.e. a «readiness potential»—can explain the sudden motion. But could it not also be argued that these electrical goings-on only mark the final stage of the initiation process? That the initiation of the initiation, as it were, began when the child first formed the intention to rip the band-aid off? Consider also planned actions. How much light does the readiness potential shed in such cases? Take, for example, someone who finally asks for his girlfriend's hand in marriage. A lot of preparation has led up to this moment: For months he has deliberated on how best to approach it; he has considered different options for time and place—should he propose at midsummer and out at sea, or rather wait until her birthday when her favorite flowers blossom?; he has rehearsed the question (the exact formulation of which he has actually written down on a piece of paper in his pocket); he has booked a hotel suite and bought a ring, and today he has picked up a lovely bouquet of roses. If someone were to explain the time and manner in which he proposed in terms of electrical signals firing in his brain, then there are reasons—not empirical reasons perhaps, but certainly philosophical ones—for being sceptical about their approach. Of course this man would never have performed as he did but for electrical firings in his brain—but this surely is a very thin explanation, and miles away from what we would, in all but very special circumstances, consider an answer to the question of what prompted him to propose when and how he did.

Friday 5 September 2014

Philosophy in Science.

Lately we have seen many heated debates between philosophers and scientists about the relationship between science and philosophy, very often framed as the question of what, if anything, philosophy has to offer science. Some prominent scientists take an extreme stance, declaring all philosophical questions to be little more than “pointless delay in our progress,” to quote Neil deGrasse Tyson. Massimo Pigliucci does a good job unmasking the naïve philosophical (!) presumptions such views rest on. Other scientists reject philosophy, believing philosophers to cook up pseudo-scientific alternatives to scientific methods. But this is clearly at odds with what most philosophers are actually doing, and a profound misconception of the relationship between philosophy and scientific methods. A while back Peter Hacker published an essay (on which I commented here), in which he addressed some of these issues. Far from offering competing explanations of natural phenomena, Hacker wrote, philosophy is rather:
…a technique for examining the results of specific sciences for their conceptual coherence, and for examining the explanatory methods of the different sciences – natural, social and human. The sciences are no more immune to conceptual confusion than is any other branch of human thought. Scientists themselves are for the most part ill-equipped to deal with conceptual confusions.
Scientists, of course, don't deny the importance of conceptual clarity. We need to know what we mean by our words in order to speak rationally. However, some don't see what philosophy has to do with it. Some argue, as Julia Galef did in an episode of Rationally Speaking, that scientists are quite capable of managing on their own:
There is an irritation on behalf of scientists or science enthusiasts, that philosophy is defending its relevance by defining as philosophy things that would have happened even without the discipline of philosophy; that there is a certain level of built-in and developed common sense and critical thinking that scientists would have even if they hadn't read any philosophers or come into contact with the field of philosophy, and to say that philosophy therefore is relevant is unfair.
This irritation is understandable. Claiming that scientists are unequal to their tasks, or even that it is the task of philosophers to tell scientists what they can and cannot do, is unlikely to find much support in the scientific community (which was hardly Hacker's aim either). Of course, describing philosophy as a nuisance isn't exactly an invitation to a calm discussion either. My aim here is not to take sides. I am rather suggesting that if we all take one step back, we will perhaps see this trench war as misguided.

Often the question is: What, if anything, can scientists gain from reading or listening to philosophers? Philosophers sometimes reply with a history lesson. A few hundred years ago all scientists were philosophers. So, until quite recently it would have been literally senseless to ask why scientists should bother with philosophers. And in recent years many great scientists have been philosophically inclined. During the twentieth century, some of the towering figures in physics and biology (Einstein, Bohr and Heisenberg, and Ernst Mayr and Richard Lewontin, for example) were well-versed in the philosophical literature of the day and thought this crucial to their own research. Even today scientists from various fields collaborate with philosophers. Hence: If nothing else, it is at least not a universally shared opinion among scientists that talking to philosophers is a "pointless delay in [their] progress". But for argument’s sake, let us assume, contrary to the facts, that this was what most scientists thought. What then? What if scientists entirely quit reading and listening to philosophers? Some philosophers seem to believe that this would result in science becoming a vessel without its pilot, forever doomed to sail round in circles in confused and muddled thinking. That seems a wild assumption. What then about the opposite assumption? Say that science were unaffected by this radical division between "the two cultures". Would this support the conclusion that philosophy is indeed irrelevant to science, as Julia Galef suggested?

That's not simply a questionable inference. Not only does the conclusion not follow, the conclusion is itself curiously incoherent. The reason, I think, is that Galef confuses two separate questions. Suggesting that philosophers are irrelevant to scientists is one thing. Suggesting that philosophy (i.e. philosophical thinking or philosophizing) is irrelevant is quite another. The first is a question of who should (or could) do the work. The second question is about what kind of work needs to be done. Galef may be right in assessing that scientists for the most part are capable of doing the conceptual and critical thinking their research requires even if they don’t read philosophical journals. This is an empirical question. But suggesting that altered reading habits among scientists could possibly make philosophical reflection irrelevant in science doesn’t even make sense. (Galef does in fact take this very distinction for granted herself when she claims that scientists can solve these puzzles without any knowledge of the field of philosophy.)

Settling the largely empirical question (around which much of the debate revolves) regarding who is best equipped to deal with conceptual confusions seems to me both trivial and unimportant – so long as those who end up doing the philosophically needed work (whatever their profession might be) do so properly.

Here I am of course doing exactly what Galef accuses philosophers of doing, namely defining as philosophy things that even people outside the field of philosophy are capable of doing (more or less successfully). But there is no need for irritation any more. Calling certain difficulties scientists inevitably are faced with in their daily work “philosophical difficulties” is not a strategy to lay claim to these difficulties on behalf of trained philosophers. The subtext is not: Amateurs aside! Such union disputes don’t interest me. (If someone objects to my using the word "philosophy" here -- why not stick with "critical thinking" if that is what you are talking about? -- my answer is that "philosophy" allows for distinctions to be drawn: not all forms of critical thinking or conceptual self-reflection are philosophical. Criticizing concepts for being used in unfamiliar ways isn't, for instance. That said, I concede that which word we use is relatively unimportant, as long as we are clear on what we are talking about. And since "philosophy" often denotes more than critical thinking, I guess that label might cause confusion too.) My point is simply that all good and honest scientific research involves different modes of thinking, including (sometimes) what is commonly called philosophical reflection.

Wednesday 14 May 2014

Ordinary critical intelligence.

Are there any readers of this blog who don't also read Language goes on holiday? Well, stop being in that category, and go check it out! In any case, this post is meant for you. I was thinking of writing a follow-up to my previous entry, but then gave away my good points commenting over at Duncan Richter’s. Rather than rewriting my arguments, I decided simply to post a gently edited version of the comments here.

 
***

Richard Taylor has written:
Students of philosophy learn very early -- usually first day of their course -- that philosophy is the love of wisdom. This is often soon forgotten, however, and there are even some men who earn their livelihood at philosophy who have not simply forgotten it, but who seem positively to scorn the idea.
I was, when writing that previous post, hoping to make use of this quote but in the end decided to drop it because I didn’t know what to do with it. I have myself never heard anyone profess such disdain, but have attended lectures where this wouldn’t have surprised me much. This attitude seems to me connected with the danger of dogmatism. One form of dogmatism which concerns Taylor is the idea that philosophy really is (or should become) like the sciences. When Peter Hacker presents philosophy as a set of techniques, this sounds too mechanical, as you [D.R.] write, but doesn’t it also, and not incidentally, suggest a model of philosophy rather too close to that of the sciences? (This is surprising because Hacker too, both previously and again in this essay, has been fighting this very model.)
 
Academic philosophers sometimes feel a need to defend their subject, which is easily understood given the worldwide trend of cut-backs in the ”unprofitable” humanities departments. However, inflated rhetoric is hardly the best way to make non-philosophers see things differently. As Miranda Fricker once remarked: "I think it is a bit ludicrous when people defend philosophy on the grounds that it teaches you how to think. That is extraordinarily insulting to other subjects!" This is partly why I too react against such claims. But I also think that philosophers, with such claims, and with claims about the expertise a philosopher acquires, give the wrong impression of what philosophical thinking actually is.
 
Historians of philosophy often regard the subject as having been invented by the ancient Greeks. When one's objective is to trace the understanding of philosophy as a more or less academic discipline with theoretical ambitions back to its origin, this story seems about right. However, philosophy has other (and deeper, yet more mundane) roots too. I am inclined to see philosophizing as a natural feature of human language use. Questions like “What do you mean?” and “What are the grounds for that claim?” were after all not invented by Thales. Nor are they something we first encounter at university. Someone might question our words whenever we say anything. Thus the most casual dinner-table conversation can suddenly transform into a discussion or a probing investigation into the structure of our concepts. (The tools needed to resolve such situations are not theories produced at philosophical institutes, but are ever-present to all competent language users in the language we share.) Philosophy -- understood as the application of ordinary critical intelligence -- is as ancient and as evenly distributed as language itself -- though some do of course exercise their critical faculties more than others.
In short, philosophers don't really do anything that non-philosophers can't do, and they don't necessarily do it better, but they ought at least to do it better than they themselves did it before they started studying and practicing philosophy, and they ought to do it without some other mission. [D.R.]

Agreed. Still, philosophers are often asked to sit on expert panels. In Norway, Knut Erik Tranøy headed several committees on medical questions; Mary Warnock has done the same in England. As far as I am able to judge, both have done great jobs; but, frankly, I believe this is more thanks to who they were and their personal character than to their educational background. This issue has been at the front of my mind lately because I am currently in the middle of applying for a position as a researcher in bioethics at my old university. If I am qualified for this job, which I think I am, this is not because I possess any philosophical (or ethical) expertise (whatever, if anything, that is); but rather because I have an interest in that field, have read a fair amount of the literature, both good and bad, and because I care about finding out which is which.

Friday 9 May 2014

Why study philosophy?

In a recent essay, Peter Hacker gives many good answers; but as is often the case with advertisements, he over-sells his product. I will be focusing on this formulation:
The study of philosophy cultivates a healthy scepticism about the moral opinions, political arguments and economic reasonings with which we are daily bombarded by ideologues, churchmen, politicians and economists. It teaches one to detect ‘higher forms of nonsense’, to identify humbug, to weed out hypocrisy, and to spot invalid reasoning. It curbs our taste for nonsense, and gives us a nose for it instead. It teaches us not to rush to affirm or deny assertions, but to raise questions about them.
Similar claims about the general usefulness of philosophy are endlessly repeated in introductory texts. (For instance by Bertrand Russell in the raving last chapter of his Problems of Philosophy: "The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the cooperation or consent of his deliberate reason.") I must admit scepticism.

When Hacker writes that 'philosophy cultivates a healthy scepticism' and 'teaches one to detect higher forms of nonsense', he makes it sound as if taking philosophy classes were a method for developing fail-safe nonsense-alerts. Because philosophy means love of wisdom, and wisdom is never foolish or gullible, there is in that sense an intimate connection between "philosophy" and "critical thinking". Empirically speaking, though, the connection is less reliable. Great philosophers have been guilty of great stupidity. Heidegger famously was a Nazi. I doubt that Wittgenstein shared Hacker's optimism; but when hearing one of his students unreflectingly repeating nationalistic slogans, Wittgenstein was infuriated, and his anger seems somehow to have been aggravated by the fact that the person talking nonsense was a philosopher:

[W]hat is the use of studying philosophy if all that it does for you is to enable you to talk with some plausibility about some abstruse questions of logic, etc., if it does not improve your thinking about the important questions of life, if it does not make you more conscientious than any... journalist in the use of the dangerous phrases such people use for their own ends.
Studying philosophy would be a waste of time if all it did was to enable one to talk about abstruse questions of logic. But there is no reason to think this is so. If anything, evidence seems to point in Hacker's direction. Philosophers seem better equipped than most when it comes to general reasoning skills. Philosophy majors tend to do very well on certain tests. Some claim this is due to their education: "[P]hilosophy majors develop problem solving skills at a level of abstraction" that cannot be achieved through most studies. But if we assume (which seems plausible to me) that philosophy mainly attracts students who already possess certain skills and interests, this cannot be the final word. It doesn't follow from this that exercising one's philosophical muscles has no effect on a person's ability to think (that would be as astonishing as stamina that could never be improved by physical exercise); but it does follow that even if philosophers demonstrate first-rate reasoning skills, it is an open question to what extent these test results actually reflect the learning outcome of their studies.

Continuing on a semi-empirical line.
Most courses in philosophy, certainly most courses an undergraduate is likely to attend, are designed not to make the student better at reasoning in general, but to make him better at philosophical reasoning. A course in moral philosophy, say, is deemed successful not to the extent its students have become more sensitive and morally reflective persons (though this of course would not be negative), but, borrowing Wittgenstein's phrase, to the extent the students have learned to talk with plausibility about abstruse questions in ethics. I don't think there is anything inherently wrong with this. Philosophising is after all working with philosophical questions. As in any academic field, philosophers have manufactured tools and techniques suited to these questions. Picking up on the jargon is the first step toward making contributions to the classroom discussions. And sometimes this will prove useful in other contexts too...
 
But reading philosophy and acquiring the analytic and argumentative tools on offer is, as demonstrated by Erasmus Montanus, not the same as becoming a clearheaded thinker. Mastering a philosophical style may even -- if it is true that certain philosophies offer nothing but fashionable nonsense -- have quite pernicious effects on one's judgement. Nor is (mainstream) analytical philosophy what Hacker has in mind when he hails philosophy as "a unique technique for tackling conceptual questions". Judging by his many heated debates with colleagues, mainly from the Anglo-American tradition, it is reasonable to interpret the quote with which I began as deliberately echoing a sigh by his friend Bede Rundle: "Whatever their limitations, earlier analytical philosophers had at least a nose for nonsense. Sadly, so many philosophers today have only a taste for it."

It is puzzling that Hacker, throughout this essay, keeps using "philosophy" as if it denoted one uniform activity ("At a very general level, it [philosophy] is a unique technique for tackling conceptual questions that occur to most thinking people" and "At a more specialised level, philosophy is a technique for examining the results of specific sciences for their conceptual coherence," and so on), when he clearly would agree with much of what I have written. The reason, I suspect, is that Hacker, as Wittgenstein often did, uses "philosophy" to refer not to everything going by that name, but mainly to his own practice. In that case his claims seem on safer footing. Hacker's texts are predominantly critical, and his ability to sniff out philosophical nonsense is (usually) impressive. Studying his philosophy -- or Wittgensteinian philosophy generally -- and acquiring some of his tools and techniques will be good for any critical thinker.

In the end, though, which philosophical texts one studies (or whether they are philosophical texts at all) is less important for one's ability to think straight than how one studies them. Reading even the most conceptually self-conscious and critical writer won't make critical thinkers out of us unless we read him critically.

Sunday 6 October 2013

Grayling on Wittgenstein.

I finally read A.C. Grayling's book on Wittgenstein. Having heard rumors about it, I did not have high expectations. Grayling's presentation of Wittgenstein's thinking is never truly deep. But as far as a very short introduction goes, his explanations of the private language argument, rule-following and so on are detailed enough. However, there is an undercurrent of hostile scepticism running through the book, which surfaces when Grayling, on the concluding pages, launches a series of objections to "Wittgenstein's conception of philosophy and his method of doing it" (p. 132). I could make this a short blog post by simply professing my agreement with this rather sour review on Amazon, but I will elaborate a little on it.

Wittgenstein's philosophy is not beyond criticism, of course, but Grayling's critiques seem to grow out of a misunderstanding of that philosophy.
[P]hilosophy is in Wittgenstein's view a therapy; the point is to dissolve error, not to build explanatory systems. The style is accordingly tailored to the intention. It is vatic, oracular; it consists in short remarks intended to remedy, remind, disabuse. This gives the later writings a patchwork appearance. Often the connection between remarks are unclear. There is a superabundance of metaphor and parable; there are hints, rhetorical questions, pregnant hyphenations; there is a great deal of repetition....Wittgenstein's style is expressly designed to promote his therapeutic objective against the 'error' of theorizing (p. 132).
As a description of Wittgenstein's conception of philosophy and his method of doing it, this isn't too far off. But readers are, according to Grayling, best advised to ignore these aspects of Wittgenstein's thinking. His programmatic remarks about philosophy, his "own official avowals about therapy and the avoidance of theory" (p. 133), are deceptive. Wittgenstein denies that his writings contain systematically expressible theories, "[but] indeed they do" (p. viii). A careful examination of his scattered remarks will uncover a philosophical theory of meaning and language with "an identifiable structure and content, even if neither, in their turn are as transparently stated and as fully spelled out as they might be" (p. 133). This conclusion, however, is possible only by doing substantial violence to Wittgenstein's texts. But this is a consequence Grayling is ready to accept, as he finds no merit in Wittgenstein's writings as such: they fail in a major philosophical duty, "namely, to be clear" (p. 133). Wittgenstein's organization of his thoughts obscures rather than illuminates their philosophical content. His writings are not only summarizable but "in positive need of summary" (p. viii).

There is one way of taking this as a charitable interpretation of Wittgenstein. When someone rambles, one should do one's best to make out what he is rambling about. From a different perspective, however, this is entirely misplaced charity. Taking Wittgenstein seriously as a philosopher requires taking his writings and the conception of philosophy they express seriously too. Language sometimes confuses us. Often we react by searching for order in the complexity. But this is confused too. Order is not what we need (nor is it to be found). The solution is getting an overview. Hence, Wittgenstein's writings are designed to ease the grip this and other deep-rooted philosophical ideas have on our thinking about language and the world, not by replacing these ideas with new ones, but rather by making their status as metaphysical ideas perspicuous to us. If we think there must be something common to everything called "games", or else they would not all have the same name, Wittgenstein's suggestion is: Don't think, but look! (PI, 66) When philosophers use a word -- "knowledge", "being", "object", "I", "proposition", "name" -- and try to grasp the essence of the thing, he encourages us instead to ask whether the word is ever actually used in this way (PI, 116). When our thinking ties itself up in philosophical knots, what we need is not another theory, for theorizing is often what gets us into trouble in the first place; what we need are methods for untying these knots.

Hans Sluga (whose latest Wittgenstein book I also read this summer) agrees with many of Grayling's descriptions of Wittgenstein's writings. But he makes something entirely different of them:
Wittgenstein covers an exceptionally wide range of philosophical and quasi-philosophical matters and ... he manages to speak about them with an unusual freshness, in a precise and stylish language, often with the help of surprising images and metaphors. This has suggested to ... a group of readers that what is of greatest interest in Wittgenstein's work is the manner in which he engages with philosophical questions. On this view, Wittgenstein teaches us above all some valuable methodological lessons (p. 16).
At one point, Grayling calls this "a neat apology for obscurity". Further down the same page, however, he suggests:
Perhaps the value of Wittgenstein's work lies as much in its poetry, and therefore its suggestiveness, as in its substance. There is no doubt that in this respect Wittgenstein's work has stimulated insights and fresh perspectives, especially in philosophical psychology, which have helped to advance thought about these matters (p. 133).
At first blush there seems to be a tension here. If Wittgenstein has helped advance thought, he has done so by helping us see our thinking afresh. Descartes' cogito argument, for instance, troubled Western philosophers for centuries. How could we possibly break out of the prison of our own minds? The so-called private language argument doesn't solve this problem, but if it convinces us that the question is confused, the argument might dissolve the problem for us. By curing us of confused thinking, a successful Wittgensteinian "therapy session", one might argue, results in the exact opposite of obscurity. But Grayling doesn't think so. On his view, philosophy (unlike therapy) is not simply combatting wrong perspectives on things, but also constructing explanatory thought-systems. And it is of course true that Wittgenstein's writings seem obscure when read as attempts to rise to these demands. However, as I have argued, I believe Grayling is wrong in assuming that Wittgenstein (contrary to everything he writes) is trying to meet these demands.

Here I am not arguing that all philosophy should be conducted in the manner of Wittgenstein (in a sense that would be impossible: if we were never tempted to theorize, "therapeutic" philosophizing would be superfluous too). What I can offer, though, is an example of how such philosophizing might work. Grayling writes that...
... it is a mistake to suppose that reminding ourselves of the main uses of words like 'good' and 'true' is enough, by itself, to settle any questions we might have about the meaning of those terms. Indeed, it is notoriously the case that questions about goodness and truth, which are paradigmatically large philosophical questions, cannot be resolved simply by noting the ways 'good' and 'true' are as a matter of fact used in common parlance -- that is, in the language-games in which they typically occur. It would seem to be an implication of Wittgenstein's views that if we 'remind' ourselves of these uses, philosophical puzzlement about goodness and truth will vanish. This is far from being so (p. 115).
When someone asks what "good" means, a Wittgensteinian would answer with a question: "What particular use of the word 'good' are you thinking about?" The meaning of "good" depends on whether you are thinking of a good taste, a good night's sleep, a good footballer, a good deed, or a good person. Forcing you to reflect harder on what you meant, this challenge might convince you that your initial question was confused. On the other hand, this needn't work, because you might, as Grayling suggests, just as well rephrase your question: "Not 'good' used in a particular way, but goodness as such." This, of course, is the kind of philosophical puzzlement Wittgenstein's "therapeutic method" is designed to combat. The fact that such reminders don't always work certainly is no proof that Wittgenstein's conception of philosophy and his manner of doing it is wrong. It only proves that his therapy doesn't always work. And there is no problem with that. Because Wittgenstein never said, as Grayling has him saying, that reminders about ordinary language use by themselves could make philosophical puzzlements go away. In addition one needs the will to receive these reminders in the right spirit. Philosophy, on Wittgenstein's account, is a fight against one's own temptation to view things in a certain way. It is not a given how that fight will end.

Thursday 4 July 2013

What I Draw from Drawing.

As you may have noticed, lately I have posted some of my own drawings on this blog. (I have done this once or twice before, even linked to the never-to-be-completed web page for my "art".) Even though this is little more than showing off, or making a fool of myself, as the case may be, I have discovered that I enjoy seeing my doodles out there, so I may continue adding sketches and drawings to my regular posts in the future.

What then do I draw from drawing? I am not sure, exactly, apart from the fact that I love doing it. But give me a little time, and I should be able to come up with some intellectually more respectable reasons for yielding to my lust. Drawing clearly has to do with perception. Drawing is mostly seeing correctly. When I am sketching, especially when drawing from life, I am concentrating on perceiving only what I perceive, not what is supposedly there. This exercise may have some spill-over effect on my other interests. Drawing sometimes feels like fighting certain temptations, not unlike doing philosophy. Wittgenstein's warning, "Don't think, but look!" (PI 66) is just as useful to an artist as it is to the philosopher. Don't think that this is a hand, and that hands have five fingers on them, but look -- study the shape of the object in front of you, how it appears from this particular perspective, don't draw what is hidden from view, how many fingers do you actually see, and so on. Drawing, therefore, is learning to see. You learn to trust your own eyes -- not blindly(!), but because you know you have made your vision more reliable through hours of concentrated practice. Kids running around can of course be distracting. This kind of distraction, however annoying it may be, is not really why concentration is essential to drawing, however. I am thinking more of silencing my own voice than shutting out those around me. Again, there is a similarity with philosophizing. Drawing too, some say, is a quest for understanding. This is often true, I think -- and as with philosophical understanding, what is required is not so much analytical tools and a talent for categorization as a simple will to listen. A good drawing session has the form of a conversation. The draughtsman too has things to say, obviously, but there is always the danger of becoming a talkative know-it-all who doesn't take other opinions seriously. Sometimes we scrutinize someone's ideas in search of symptoms. This might result in a diagnosis. That is what understanding someone means in psychiatry. But this is not conversing. The understanding of someone that might come from a genuine conversation, i.e. when we are tuned in to each other the right way, is more akin to becoming familiar with each other, getting, as we say, to know that other person. When drawing, particularly when drawing from life, I am, in similar fashion and for similar reasons, trying to quiet my own voice, which keeps telling me this and that about whatever I am looking at, in order not to interrupt the object in front of me. Another name for this efferent concentration, the other-directed concentration I am aspiring to when drawing from life, is attention. "Attention," according to Simone Weil, "is the rarest and purest form of generosity." So there you are. Yielding to a lust contributes to my virtuousness!

Enough rambling!

This will never become an art blog, let alone a blog devoted to investigations of the act of art making, because, as you will appreciate by now, I have a hard time expressing (in understandable terms) what drawing is, what it means to me and what one can learn by it. One who manages to do just that is John Berger. He truly knows what he is talking about, both as an artist and as an art historian, and his writings are always eloquent and a pleasure to read. In particular I have enjoyed, obviously, Berger on Drawing. His book About Looking, which is less devoted to drawing, but discusses photography, perception and art in more general terms, is also inspiring -- and it opens with the by now classic essay "Why Look at Animals?". Ways of Seeing, first published in 1972, has been highly influential in that it focuses on, and to some extent has altered, how we look at pictures. I have yet to read Bento's Sketchbook, with the subtitle "How does the impulse to draw something begin?", but as it promises reflections on sketching soaked with philosophy, Baruch (or Bento) Spinoza's in particular, the book sounds almost too good to be true: "Bento's Sketchbook is an exploration of the practice of drawing, as well as a meditation on how we perceive and seek to explore our ever-changing relationship with the world around us."

There are any number of good how-to books out there, books that teach you different techniques, what pencils and brushes to use, how to achieve certain effects and so on; but the best book I know of that aspires to teach people to use their eyes is Betty Edwards' Drawing on the Right Side of the Brain. I bought my copy on a visit to the US many years ago, and made good use of it. This is very much an anyone-can-learn-how-to-draw-properly kind of book. Like learning how to read, it doesn't require any special gift, nor is there any mystery about it -- it is just a matter of learning to do certain things the right way. As the title suggests, the book is based on a theory about the two brain hemispheres having different capabilities, one side specializing in linguistic tasks, the other in visual-spatial ones. Edwards claims that the linguistic side is too dominant, and that this is what needs fighting if we are to learn how to draw anything but stylized images of things. I don't know how well supported these theories are, and frankly it doesn't matter. Who cares whether the theory is true or false, so long as the treatment works? I always thought the theoretical parts of the book unnecessary. You do not need to tell a child anything about its brain for the reading exercises to work. In Norwegian the book is titled Å tegne er å se, which translates as "Drawing is seeing", and that is quite enough for me.



Thursday 11 April 2013

Goldilocks Universe.

Before the Big Bang no-one would have guessed that life would ever evolve. For life to evolve innumerable things had to turn out exactly as they did. According to science, the tiniest alteration of just one of many crucial parameters, and life as we know it would never have existed. Some calculations indicate that the force of gravity must be accurate to one part in 10^40 in order for this to happen. "It's as if there are a large number of dials that have to be tuned to within extremely narrow limits for life to be possible in our universe. It is extremely unlikely that this should happen by chance," according to Alvin Plantinga, "but much more likely that this should happen if there is such a person as God."

This is sometimes known as the design explanation for the fine-tuned universe. A life-supporting universe is intrinsically unlikely, or so it is argued. But if there is an intelligent Creator, then that would explain everything. Random chance would only raise the question as to why this universe could be so "lucky" as to have precise conditions that support life (at least here and for the time being). But if everything were designed according to some intelligent plan, then this mind-boggling precision would be exactly what one would expect.

A rival explanation is the multiverse hypothesis, according to which there is a whole bunch of universes -- not just galaxies within our own universe, but complete universes. Given a string of universes, one would expect the parameters of basic physical factors to show up in endless combinations. That one combination is suitable for life should not surprise anyone. "If there is a large stock of clothing, you're not surprised to find a suit that fits," in the words of Martin Rees. "If there are many universes, each governed by a differing set of numbers, there will be one where there is a particular set of numbers suitable to life. We are in that one."

Replies Plantinga:
Well, of course our universe would have to be fine-tuned, given that we live in it. But how does that so much as begin to explain why it is that [our universe] is fine-tuned? One can't explain this by pointing out that we are indeed here—anymore than I can "explain" the fact that God decided to create me (instead of passing me over in favor of someone else) by pointing out that if God had not thus decided, I wouldn't be here to raise that question. It still seems striking that these constants should have just the values they do have; it is still monumentally improbable, given chance, that they should have just those values; and it is still much less improbable that they should have those values, if there is a God who wanted a life-friendly universe.
There are difficulties here. Whether or not this God-hypothesis is more probable than other explanations depends, as John Perry argued in a teaser for a recent Philosophy Talk, on what is required for the existence of such a God. "Wouldn’t that in turn require the existence of a Creator-friendly universe, or proto-universe, with parameters set to allow for the development of such a powerful and wonderful Being, capable of setting the parameters for our universe?" A clever question, no doubt; but I am sure Plantinga would have an answer ready and argue that intelligent design nevertheless is the more plausible hypothesis. I am not writing this to take sides. In fact, I am not sure what the discussion is all about. Arguing over which explanation (for why the universe is suitable for life) is more probable looks like an argument where language has gone on holiday. Probability is something we normally talk about within the universe -- when unlikely things happen, we ask for explanations -- but here the question is how probable the universe itself is. How unlikely is a life-supporting universe? And how unlikely is it that such a universe should develop by sheer coincidence? Unlikely, compared to what?

I am not sure I see why there is something here to explain either. Consider the following quote from Bill Bryson's A Short History of Nearly Everything:
To be here now, alive in the twenty-first century and smart enough to know it, you also had to be the beneficiary of an extraordinary string of biological good fortune. Survival on Earth is a surprisingly tricky business [...] Not only have you been lucky enough to be attached since time immemorial to a favoured evolutionary line, but you have also been extremely -- make that miraculously -- fortunate in your personal ancestry. Consider the fact that for 3.8 billion years, a period older than the Earth's mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life's quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result -- eventually, astoundingly, and all too briefly -- in you.
Of this, I am tempted to say, if things had gone differently, I simply wouldn't have been here to raise the question at all. I fully agree that that doesn't explain anything, but I do not share Plantinga's (and others') need for an explanation. Reading Bryson's tale fills me with amazement too, momentarily at least. Of all the things that could have gone wrong, none did! It is incredible that I got here at all, yet here I am! But there seems to be a certain (mischievous?) picture here -- of a tiny charge of genetic material that has made it safely through 3.8 billion years of continuous narrow escapes in order to make me. This looks like quite an accomplishment. When a package reaches its final destination unscathed, despite having faced earthquakes, avalanches, blizzards, mine-fields and so on, then that might make us look for an explanation. Calling it chance will hardly satisfy our need. (That's because chance isn't really an explanation at all, but rather something we appeal to when there is none.) Intelligent design (or Providence or Destiny) might look like much better options.

However, I cannot settle for this picture, mainly, I guess, because I cannot see any sense in saying that if things had been a little different and my parents hadn't met, then I would have been a package forever lost in the mail, as it were. One can say (and some do), how amazing it is that my mother happened to meet my father, of all the men in the world. What were the chances!? But to me this sounds confused. It does make some sense to count yourself lucky that your parents met and fell in love. But the sense is not that it would have been very unfortunate if they hadn't met, because then you would never have been born. One cannot say, bad luck for all those whose parents never meet. (Though that seems to be exactly what Plantinga is saying, when claiming that God, in creating me, somehow decided not to pass me over in favour of someone else.) When one says, how very lucky I am to be alive, this is normally a way of expressing one's gratitude, not a probabilistic judgement.

Your parents did meet, of course. Good for you, but is it more than that? Is it amazing? Today your father, of all the people in the world, sat next to this particular odd person on the bus. Why should the first be more in need of an explanation than the second? The first strikes you as wondrous because it was crucial for your existence. But suppose your father hadn't met your mother that day; suppose he had gone to the movies instead of the beach, fallen in love with someone else, and fathered, not you (you simply wouldn't exist), but someone else. Would that have been amazing too?

Wednesday 6 February 2013

“Is language for talking with?”

"Tydinga [av orda] tillet så mykje tvitydigheit at språket blir lite eigna til kommunikasjon mellom menneske," skal Peter Svenonius, lederen av Centre for Advanced Study in Theoretical Linguistics, ha uttalt, i følge Jan Terje Faarlund. (Klassekampen 26. januar) I visse fagmiljøer, som dette miljøet i Tromsø, har en teori om at språket først og fremst er til for å tenke med og kun sekundært er et kommunikasjonsmiddel etterhvert fått stor støtte. Teorien er omstridt, men, skriver Faarlund, det finnes en del forhold som tyder på at den er korrekt. Svenonius’ utsagn er “nok er ei overdriving eller i beste fall ei spissformulering,” men sier likevel noe viktig og riktig om språk og grammatikk:
We have a host of grammatical rules and principles that we follow automatically when we speak, but which are dysfunctional and totally useless from a communicative point of view. A sentence like I bought beer can be an answer to What did you buy? But for the answer I bought beer and wine we do not have the corresponding question What did you buy beer and?
As a matter of fact, it is not long since I asked my wife precisely that question (though I would have transcribed it like this: What did you buy, beer and...?), so Faarlund's choice of example of a useless principle is perhaps not entirely fortunate. We automatically refrain from stringing words together in meaningless ways. I fail to see how this is dysfunctional, if that is what Faarlund means. In what way would it be useful to be able to communicate by means of nonsense? But perhaps Faarlund means that such sentence constructions ought not to have been meaningless; that it would have been practical if they actually made sense -- but exactly what sense should they have made? Faarlund continues:
This holds not only for Norwegian, but for all languages we know of. In Norwegian and a number of other languages we can move something out of a subordinate clause and place it at the front of the main clause, as for example Such an offer I would be very happy if I got _ (here the fronted constituent is underlined, and its "proper" place is marked with a line). This is of course very practical if we want to emphasize something in a conversation.
But why, then, can we not also say That offer I will stay in my job if I don't get _. That should be just as practical. Language is subject to a host of such restrictions that have no communicative function.
The first time I read this, I found it reasonable to claim that we can never say anything of the sort (or rather: we can of course say "That offer I will stay in my job if I don't get," but not communicate anything by it), but unlike Faarlund I did not see how being able to do so would have been practical. There is no clear boundary between grammatical mistakes and mistakes in thinking. We often express ourselves perfectly clearly with words and phrases our Norwegian teacher would have picked at. Omitting the subject, for example. Sentences without verbs. But some grammatical offences really do result in nonsense. What would be so practical about being able to string words together just as we pleased!? On a second reading I have a different objection. Is it so certain that we will never be able to say "That offer I will stay in my job if I don't get"? The sentence seems silly -- that is Faarlund's point too -- but the question is why. Does the sentence seem silly because it violates certain linguistic restrictions, as Faarlund suggests, or does it seem silly because we have not yet imagined a communicative context in which it could be useful? Take a sentence that at first seems more innocuous -- the question "What did you buy?" The question seems more innocuous because it is grammatically well formed, but don't we have to know the context in which it is asked in order to know what it means (or whether it means anything at all), what is being asked for, and what the correct answer would be? If I come straight from the shop with full carrier bags, the answer may be that I bought beer and wine. But if I have been out skiing, or come straight from the shower, I may not understand what you mean. If you shout the words in your sleep, I might say the question wasn't even a question. And what if a complete stranger grabs hold of you in the street, stares into your eyes, and asks "What did you buy?"? -- Correct grammar is, in other words, not (always) decisive for what we can and cannot say. The fact (if it is a fact) that no one says "That offer I will stay in my job if I don't get" is not due to linguistic restrictions. We refrain from saying such things as long as we have no use for the utterance. But we can perfectly well imagine the sentence used as a code or a password, for instance, or as a riddle, or as an example in a treatise on the philosophy of language.

But I do not think this will make much of an impression on Faarlund. He is used to such objections from humanists. Researchers can approach language in two ways, from an external and an internal perspective:
In the external perspective one looks at language in relation to social and geographical factors, and in relation to different genres and modes of use. In the internal perspective one looks at language as a cognitive or mental object which enables human beings to produce and understand utterances. The object of research is thus located in the brain of the individual language user, and the research aims at understanding what kind of phenomenon human language is, and what makes us the only species in the animal kingdom that has such a language.
To the inner language of thought we "of course have no direct access". Faarlund therefore does not dismiss the external perspective. The way language is actually used is the most important source of research data for anyone interested in the language of thought. But why assume that this cognitive object exists at all? Because it has to! "This perspective is … necessary for understanding how children learn language so early and so quickly -- and without systematic instruction." Without some cognitive or mental object in the brain of the individual language user, language acquisition would have been an impossibility.

Wittgenstein gives an alternative description of how language acquisition takes place:
The origin and the primitive form of the language-game is a reaction; only from this can the more complicated forms grow… Language, I want to say, is a refinement, "in the beginning was the deed". (Filosofi og Kultur, p. 67)
Language grows out of our pre-linguistic patterns of reaction and action. When an infant is hungry, it squirms in its bed and cries. Later the child learns to point at the milk bottle. It learns to say "Food!". Later still, the child learns to say that it is getting hungry and will soon have to think about dinner, but that the hunger is not pressing, so it can go home and eat. Wittgenstein describes language acquisition as a process in which our primitive or animal behaviour is gradually refined and (partly) replaced by another, namely verbal, behaviour. The starting point we share with many other animals. The cat, too, miaows by the refrigerator or sits down by its food bowl, much as small children do. The difference is, as Norman Malcolm writes somewhere: "In the case of the infant, words and sentences will gradually emerge from such behavior. Not so with the cat." (Wittgensteinian Themes, p. 71)

Faarlund agrees that human beings, like other animals, communicate by means other than language -- "by gestures, facial expressions, body language, tone of voice and so on": "Most of our non-linguistic means of communication we share with the other apes." This perspective is indispensable for anyone who wants to understand language use and human communication, but in order to understand what "enables human beings to produce and understand utterances" and "what kind of phenomenon human language is", we need the internal perspective.

When words and sentences emerge spontaneously in infants, but not in cats and chimpanzees, differences between brains may well have something to do with it; but the hypothesis that language developed primarily for thinking and only secondarily for talking explains nothing. The theory of an innate understanding of language, existing (in people's brains) prior to and independently of both linguistic and non-linguistic communication with others, is a misunderstanding. This way of thinking overlooks how language and the understanding of language emerge, how the child gradually learns to use the language, how language gradually transforms the child's life and its life with others -- and yet that is precisely what the theory wants to explain.

"Tydinga [av orda] tillet så mykje tvitydigheit at språket blir lite eigna til kommunikasjon mellom menneske," påstår Peter Svenonius -- som om våre tanker var entydige så lenge de bare var tanker i hodet, men at de ble uklare straks vi forsøkte å sette ord på dem. Jeg avviser slett ikke at det av og til oppleves på denne måten; at vi gang på gang forsøker å forklare et eller annet, bare for å bli misforstått. Ordene tolkes ikke slik de er ment. Men kommunikasjon oppleves ikke alltid slik. Hvis vi forsøker å se vekk fra all “ikkje-språkleg kommunikasjon” i en samtale, vil vi ofte ha vanskelig for å vite om ordene er ironisk eller oppriktig ment, om den som snakker er fornøyd eller sint -- “kanskje er det,” som Faarlund skriver, “derfor det så lett blir oppheita diskusjon på sosiale medier som Facebook, der vi ikkje ser den vi diskuterer med” -- men all samtale foregår jo ikke på Facebook. Vi skiller ikke kroppsspråket, stemmeleiet, ansiktsuttrykket, kort sagt: måten ordene brukes på, fra ordenes betydning på denne måten.

Furthermore: What does it mean to say that my thoughts are unambiguous to me? That I always know what I mean when I speak! But what does that mean? Does it mean that I can always answer when my use of words is questioned? Faarlund describes the internal perspective in linguistics as an interest in “language as a mental object, which enables us to use language”. So language is what enables us to use language? Or is it that whenever we use language, it is a mental object we are using? Possibly this thought is ambiguous only in communication, yet clear and distinct -- in the sense that no critical question can catch him out -- to Faarlund himself; but surely both Faarlund and Svenonius must have had the experience of expressing their innermost thoughts, only to be asked whether they meant this or that, without themselves knowing the answer. It is a fairly everyday experience that our thoughts turn out to be unclear, and that communication helps make them clearer to ourselves as well.

torsdag 20. desember 2012

Awful yes, but, in the long run, worth it.

I should perhaps apologise. This turned out to be a terribly long post. I guess I could learn something from other writers telling me to kill my darlings. But I am convinced that one should never kill anyone, not even when that means a less-than-perfect result. So, there you are.


***

Great people are often lousy persons. The makers of history reach the top by climbing all over lesser and more polite individuals. Or as the Norwegian humorist Odd Børretzen once put it: While the rest of us quietly await our turn, the Napoleons of history trample in, with muddy boots on, knocking over chairs and tables and demanding to be served coffee and cake. But even though they cut in line and leave nothing but crumbs for us, we admire their accomplishments.

Moody, rude, paranoid, self-righteous and vindictive, Isaac Newton could have gone down in history as a minor villain. Instead this nasty piece of work is held in high esteem. His scientific genius and important discoveries obviously distinguish him from your common thug. But should he therefore be held to a different standard? Do people who accomplish things deserve to be given some moral slack?

Thomas Hurka thinks so. Driven individuals might treat other people less than gently; but if their single-minded preoccupation yields good results, then we can surely forgive a little rudeness. Who would argue that Newton’s law of gravity, in the long run, was not worth some sore toes? What he lacked in politeness, he made up for with greatness.

Hurka argues as if history will excuse any questionable means by which great goals are reached. In hindsight, it might look as if this is what history does. Today, no one resents Caravaggio (as many did in the 17th century) for his petty behaviour. Today, most art-loving people simply admire him (as few did back then) for his paintings. But have we really excused the former because of the latter? For reasons I will return to, I think not. (If history somehow had decided that the paintings in the long run are more important than the not-so-good things that made them possible, then history must, somehow, have compared the two things, and, somehow, have found that the good consequences outweigh the not-so-good means. But how on earth could such a comparison ever be made? What has happened, I think, is rather that the means simply have vanished from our field of vision. History has forgotten rather than forgiven, I believe.) But let me first point out some of the more obvious problems with Hurka's reasoning.

Hurka is careful not to allow geniuses to do anything just because they are geniuses. They cannot get away with trampling on people for no good reason. Trampling on others is justified only if it somehow contributes to their artistic or scientific excellence, or if their unsociable behaviour is a necessary side effect of the dedication they need to give their art or science in order to achieve such excellence.

Perhaps, Hurka suggests, the dedication needed just isn’t compatible with being too concerned with other people (therefore, in order for great things to keep happening, we might need to give these people some slack). Perhaps, indeed. This is an empirical hypothesis. But however are we to test its truth-value? There might be some connection between dedication and unsociable behaviour -- and there might be none. Or perhaps there is such a connection in some cases, but not in others. And how do we distinguish the faux pas that somehow did contribute to Caravaggio's and Newton’s accomplishments from their inexcusable misdemeanours? How do we decide which were necessary and which were not? This distinction is crucial to Hurka’s argument, but I see no practical application of it.

Another problem is this: An artist might behave like a jerk, but if this somehow contributes to, say, his revolutionising the history of modern art, then his nastiness is, as it were, compensated for. But if he does not accomplish anything great, then his nastiness is not compensated for, and he simply is a jerk. So what should we do? Just wait for the end result (or the historical verdict) before making up our minds about the actions of ambitious people? If so, we might have to sit back for a loooong time: Rembrandt was regarded as a merely skilled painter until some hundred years after his death.

And what are we to say to aspiring artists? Should we encourage talented people to behave like jerks, in the hope that this will eventually enable them to do great work? Wouldn't that inevitably entail encouraging a lot of people who will never accomplish much of anything to behave like jerks, because it simply is impossible to tell in advance who will produce something invaluable?

Apart from these (we might say) practical objections, there are philosophical ones too. This one, for instance: Can history ever forgive anything? (This is not the trivial point that History is not a real agent; I am suggesting that it never is up to the whims of history to decide what is forgivable or not.)

During the interview, a listener raised a pertinent question. She mentioned a long series of artists who have benefited artistically from terrible afflictions. She wanted to know whether their works somehow justified the suffering they experienced. Here is Hurka's response:
Let's say someone like Mickey Rourke has to go through periods of suffering in order to become a better actor. I would say that that suffering was redeemed, if you want to use that word, by the fact that it led to something more valuable later.
How can he say something like that? It is not inconceivable that Rourke himself could end up viewing his own suffering as redeemed in this fashion, in which case Hurka's statement would be less problematic; but I see no way in which Hurka can decide that the suffering is so redeemed. Hurka seems to take for granted that Rourke's periods of suffering would be redeemed if they led to something more valuable later. But is this really a given? What if Rourke himself thought otherwise? What if Rourke claimed that nothing could redeem his sufferings? Imagine that he, precisely because of the sufferings that eventually made him the greatest actor in the world, continued to curse the day he was born for the rest of his life. Is there any objective point of view from which we can say who’s right and who’s wrong on this matter? I don’t think there is a question of being right and wrong here at all. Mickey Rourke simply is the only person entitled to say what, if anything, could possibly redeem his painful experiences. Try, if you can, to imagine Hurka telling Rourke on his deathbed that, never mind Rourke’s own opinion, his periods of deep misery had in fact been redeemed by his periods of great acting. How would that be received -- as a comfort?

But say that Rourke was happy to say that his misery was redeemed by his later career. His wife, let us imagine, is less forgiving. She keeps complaining about the anguish he caused her while he was being a bad guy making himself that great actor. What could Rourke say in his defense to be morally excused by her?
If he really was the greatest, then it would still be a violation of her rights, but it can be justified in the long run by what it made possible….
Again the objections are obvious. Is Hurka really in a position to forgive a husband his violation of his wife’s rights? Or is his point rather that the wife should forgive her husband on these grounds? If so, how is Hurka in a position to make that judgement? "Only those who suffer the wrongdoings of others are entitled to forgive," Dostoyevsky wrote in The Brothers Karamazov. That is what forgiveness means. There is no non-afflicted or objective point of view from which we can assess whether an action is forgivable or not. Only the victims are entitled to judge whether the damage done by some artist should be forgiven because of the beautiful art the wrongdoings made possible.

Consider the infamous case of Paul Gauguin abandoning his wife and children in order to go to Tahiti to paint. Upon hearing about this, some might say “What a terrible thing to do,” and think it would have been better had he stayed at home, even though his negligence proved to be decisive for the history of modern art. I understand, to some extent, why some would think so; but in the end I do not share their sentiment. What I say is closer to: “What a terrible thing to do, but thank God! Had he not gone, he never would have made those wonderful paintings!”

It is not necessarily insensitive to say that, I think. (Is it not rather parallel to being glad of the knowledge we have of certain medicines, but at the same time being horrified by the way that knowledge was obtained? “How could anyone do such a thing,” we say when hearing about certain medical experiments from the past, “those people should have been prosecuted”; but still we have few scruples about taking advantage of the knowledge once we do have it.)

We are not contemporaries of Gauguin, and I think that is an important difference. We are talking about incidents from a distant past. We will never meet his family, and are unlikely to offend anyone by concentrating on the man's art. In fact, I guess, we are more likely to anger his present-day descendants by doing anything else. But, of course, it would have been distasteful if we, back then, had encouraged Gauguin to abandon his family for his career (Gauguin was a nasty piece of work, so for all I know his family could have been happy to see the back of him). And I believe that Mme. Gauguin would have been justifiably offended if her heartache had been ignored just like that. Imagine that Mme. Gauguin confided in me, pouring her heart out in despair, and I couldn't stop praising her husband's talents...! Or if I, paraphrasing Hurka, had replied something like this:
"Yes, he was a lousy husband; yes, he trampled all over you and your children; yes, he violated your rights and your marriage -- but we must take care not to end up as moralistic monomaniacs. There are other kinds of value too, you know. Look at your husband's whole life -- not just his meanness, but also the vast number of great works he went on to produce. When you realise that your suffering was necessary to make that contribution to the History of Mankind possible, we can both agree that we shouldn't make such a big deal outta that, don't you think?"

torsdag 22. november 2012

Procrastination.

Here's a list of some things I've been up to lately, when I, in all honesty, ought to have done other things.

1) I have been engaged in a short exchange of opinions about the usefulness of philosophical biographies, or rather biographies of philosophers, mainly concerning Ludwig Wittgenstein, with Duncan Richter.

2) By way of this interview with Ken Taylor and John Perry (hosts of PhilosophyTalk) on "The Uses of Philosophy", I discovered Entitled Opinions a couple of weeks ago -- a podcast of which I have grown fond. It is hosted by Robert Harrison, a professor of Italian literature at Stanford University. Over the years, the programs have covered a wide range of topics "about Life and Literature". The archive now contains more than 140 shows. Obviously, I haven't listened to all of them, but I have had a few great moments. One of my favourite episodes so far is this interview with Joshua Landy about Marcel Proust, which provoked me to philosophise about voluntary and involuntary memories -- a topic I may (or may not) blog about in the near future. And I have seldom, if ever, heard a deeper and more thoroughgoing discussion (much of which went over my head) of any literary topic on public radio than this discussion of Moby Dick. Yesterday I began reading one of Harrison's books too. I am now halfway through his book on Gardens, which, among other things, has a beautiful chapter on the similarity between gardening and tutoring, Plato's academy, and the importance of discussion in education. I also look forward to his earlier book on Forests. So far I have only leafed through it, but it looks like mandatory reading for anyone who, like me, struggles to grasp our conception of human-nature relationships.

3) Speaking of which... Yesterday, while going through some old notes of mine on nature, this unrelated passage on pains and phantom pains appeared. It seems worth sharing. I am commenting on the following proposition by the materialist philosopher D.M. Armstrong:
We say that we have a pain in the hand. The SENSATION of pain can hardly be in the hand, for sensations are in minds and the hand is not part of the mind.
I am quoting from Consciousness and Causality here. According to my notes the passage should be on page 105, but when I tried to look it up, I was unable to retrieve it. Anyway, my reaction was as follows:
Imagine that I went to the doctor with pains in one hand, and the doctor replied that it was all in my head. I would be surprised, if not offended. Normally we distinguish between real and imagined pains, pains that are, as we say, in the hand and pains that are only in our heads. To say that the pain I feel in my hand really is located in my head, would in most circumstances suggest illusion or hypochondria.
What about phantom pains? Phantom pains are often brought up in this discussion. People can apparently experience pains in limbs they no longer have. Doesn't that suggest that sensations take place in the mind rather than in our limbs? The experienced pain certainly cannot be located in the hand, because that location simply doesn't exist! The argument supposedly strengthens the view that all sensations, even experiences of pain in existing limbs, take place in the mind. But I doubt that arguing from phantom sensations can demonstrate that.
But think of the experience, the sensation of pain -- wouldn't the sensation be the same whether the hand exists or not? And if so, wouldn't that sensation have to be located in the same place too? I am not convinced by that, because I am not convinced that the experiences will be identical in the first place. But this needs a little investigation. Examples might help. Imagine that a person missing her right arm experiences pain in her non-existent index finger. Wouldn't that experience be identical to a pain experienced in her left index finger? Not necessarily. Say she lost that arm three years ago, has learned to live with only one arm, and so on. Phantom pains certainly can be painful, but won't she experience them as phantom pains? I mean, how would she describe her experience? Would she say "My right index finger pains me", or would she rather say something like "oh no, not this again"? But let us modify the situation a little, so we can put these reservations out of play. Imagine a woman waking up from a coma in a hospital. She has survived a terrible car accident. She experiences excruciating pains in both her arms. However, while she was unconscious, the doctors amputated one of the arms. Could we confidently deny that the sensations would be identical in this case? Perhaps not. I, for one, doubt that the injured woman could tell, just by introspection, that one of the pains was, in a sense, less real. Let us put more pressure on our commonsensical view. Let us imagine that this woman was, when the accident happened, already on her way to the hospital with a badly injured arm -- incidentally the one now amputated. When she wakes up in the hospital, wouldn't she cry out something like "The pain is still there!"?
However likely that is, does it in any way suggest that the pains were not in her arm even before the amputation?
Consider this case: A woman is about to have her arm amputated because of the excruciating pains it gives her. She watches as the knife removes the arm, but to her horror, the tormenting pains continue. What should we say about that? I am not sure. Saying that the pains somehow migrated from her arm to her mind (or whatever) at the moment of separation, or that the pains instantly jumped from reality to illusion, or that real pains were, with a stroke of magic, replaced by phantom ones, certainly looks unhelpful. Should something like this ever happen, I think we might find it reasonable to believe that phantom pains may somehow occur in existing limbs too, though such a concept would hardly be much in use, as amputating the limbs in question would be the only way to tell whether the pains were physical or mental. But to conclude that all bodily sensations are mental, as Armstrong seemingly does, wouldn't be helpful at all. First of all, the case we are now considering would be just one incident, and a rather extreme one at that. A deeper problem is that this conclusion is suicidal, because it undermines the distinction between real and illusory experiences altogether -- the very distinction that makes all talk of phantom pains possible in the first place -- the distinction on which all arguments from phantom experiences rest.
At this point my train of thought sort of wandered off, but I think these paragraphs contain some points worth pondering.

(Finally. While surfing on the internet -- as a way of not working on this post -- I just now came across this piece about philosophy and parenthood. The text seems like a possible topic for a future post. While I never would say that "one cannot fully realize one's potential as a philosopher unless one is a parent", I think being a parent might help you philosophise about, say, parenting and parenthood....)