Wednesday, 31 October 2012

Animal Kindness.

Mark Rowlands has recently published a book in which he defends the concept of animal morality. I haven't read it yet, but I am curious about it, because Rowlands has also written an essay on the subject that seems problematic to me. The author acknowledges that most philosophers and most scientists, past and present, would disagree with him, but he thinks their scepticism is ill-founded. To some extent I may agree with him on that, but I remain a sceptic -- for slightly different reasons.

The scepticism of philosophers towards the idea that animals can behave morally is subtly different from that of scientists. Scientists question whether there is enough evidence to support the claim that animals can be motivated by emotions such as kindness or compassion, or by negative counterparts such as malice or cruelty. Philosophers argue that, even if animals were motivated by these sorts of states, this would still not be moral motivation: when they occur in animals, these states are not moral ones. Compassion, for example, when it occurs in an animal, is not the same sort of thing as compassion when it occurs in a human. In an animal, compassion has no moral status, and so even if the animal acts out of compassion, it is still not acting morally.

Rowlands recounts a number of anecdotes and stories about animal behaviour to strengthen his case, including the famous incident at Brookfield Zoo where a toddler climbed the fence and fell five metres onto the concrete floor of the gorilla enclosure. Horror-struck, spectators could only watch as a full-grown gorilla approached the injured boy. Then the unexpected happened. “Binti Jua lifted the unconscious boy, gently cradled him in her arms, and growled warnings at other gorillas that tried to get close. Then, while her own infant clung to her back, she carried the boy to the zoo staff waiting at an access gate.” Some commentators were less than impressed by Binti Jua’s good deed, claiming that it wasn't a good deed at all: she had simply mistaken the unconscious boy for one of her stuffed toys. I agree with Rowlands that this sounds ludicrous. Moved by the gentle compassion and concern Binti Jua showed for the little boy, Rowlands sees no reason to doubt that her rescue was motivated by genuine empathy. Neither do I. Still, it seems philosophically problematic to describe her behaviour in moral terms.

Philosophers, according to Rowlands, are often guilty of overemphasising the role of rationality in morality, sometimes making it a sine qua non, as when Immanuel Kant denied that actions motivated by sentiments or feelings rather than duties and principles could ever be described in moral terms. Because morality, traditionally speaking, has been so tied up with the notion of responsibility, most philosophers have been united in their reasons for thinking that animal behaviour cannot be judged by moral measures. To be morally responsible for one's actions requires an ability that animals lack, namely the ability to scrutinise one's motivations critically. It is not simply that a dog or a gorilla, as it happens, never engages in this sort of self-scrutiny. “What is crucial is that it cannot do this -- it does not have the ability to scrutinise its motivations.” That is why, according to the philosophical tradition, humans, and humans alone, are capable of acting morally.

But, Rowlands asks, isn't it “possible to do things that we ‘ought’ to do, even in the absence of critical scrutiny or rationalisation about alternative courses of action”? Plainly, the answer is yes. Likewise, he argues, animals can be motivated by a desire to do good (or bad) things. “A dog,” writes Rowlands, “can be motivated by the desire to rescue his companion, and rescuing his companion is a good thing.” All of this seems all right to me. However, I am not sure it “opens up a new way of thinking about the moral capacities of animals” -- or, to put it another way: whether we should describe animal behaviour in moral terms isn't simply a question about animals' capabilities. Of course, it is related to the question of which feelings, emotions and motivations animals are capable of having -- but the question whether we should regard animal behaviour in moral terms is related to a number of other questions too, questions having to do with our attitude and relation to these animals: for example, what feelings, emotions, motivations and actions we are entitled to expect or demand from animals; how we should understand, judge and react to their behaviour if, say, they fail to live up to expectations; and so on.

The philosophical problem, as I see it, can perhaps be condensed into this question: Can our moral language, with all its fine-grained and critical distinctions, be used at full stretch when talking about animal kindness?

[In 1964] Stanley Wechkin and colleagues at Northwestern University in Chicago demonstrated that hungry rhesus monkeys refused to pull a chain that delivered them food if doing so gave a painful shock to another monkey. One monkey persisted in this refusal for 12 days.

Now, I am moved by such accounts. I think I can, to some extent, understand this monkey. I can, I think, understand what motivated its behaviour. I do not hesitate to describe its self-sacrifice as a compassionate response to the painful cries of its fellow monkey. I may even call this an instance of proto-morality, as people sometimes do. "Proto-morality", apart from suggesting a story of where our own so-called moral sentiments evolved from, would simply underline the fact that I find the monkey's persistent refusal of food admirable. If I thought the monkey's failure to pull the chain might be explained not in terms of concern for its fellow monkey but, say, as an aversion to the noise a rhesus monkey makes when it receives an electric shock, then my attitude would be quite different. I would take a stance resembling that of those who tried to explain away Binti Jua's kindness as a simple mistake. But I see no reason for taking that stance. Instead, with phrases like "self-sacrifice" and "proto-morality" I am placing the monkey's behaviour in the vicinity of a human hunger strike. "Proto-morality" would suggest that the monkey was indeed motivated by compassion, concern or empathy, that it took great will-power to keep the refusal up for twelve days -- and also that this monkey's behaviour, like similar human behaviour, is something we can regard with genuine admiration.

If this is the attitude Rowlands wants to convey by calling such behaviour morality proper, then, as I say, I see no deep problem here. However, Rowlands seems to suggest something more -- and then all the awkward questions (not just about responsibility) that he tries so hard not to invite when talking about animal morality come marching in.

In what sense can I morally admire the rhesus monkey's behaviour? I am sure rhesus monkeys are creatures capable of feeling pity and empathy, and I am equally sure that this particular monkey stayed off food, despite its growing hunger, because of feelings like these -- but I am not at all sure what it could possibly mean to say that the monkey did the (morally) right thing, or that empathy was the only appropriate emotion in its situation. Calling something right seems to imply that the opposite must be wrong. Does Rowlands think it would have been wrong for the monkey to pull the chain to get the food? If so, in what sense? In the sense that condemnation would have been called for? A dog may, as Rowlands writes, be "motivated by the desire to rescue his companion, and rescuing his companion is a good thing," but would Rowlands also say that a dog ought to desire to do good? Could his companion justifiably expect help, and if help didn't come, should it regard him as a lousy friend? A dog who runs away with its tail between its legs may be deemed a coward. Such a dog may be a disappointment to its owner. It may be unfit for the tasks its owner had hoped it would perform. A coward will, for instance, make a bad police dog. But in the morally pertinent use of "coward" there is a sense of condemnation. Does Rowlands think condemnation would be appropriate in such a case? Rowlands is justifiably moved by the story of Binti Jua. Calling her action good seems all right to me, but morally good seems to imply that the other gorillas, who seemingly didn't lift a finger for the injured boy, were morally in the wrong. If a human being had simply ignored an injured child, bystanders would certainly have reacted with anger. But would it have enraged us if Binti Jua had simply turned away? Or what if she had turned on the boy violently instead? That would undoubtedly have horrified us, but would it have been a moral horror?

Asking these questions is partly what I mean by using moral language at full stretch.

I am, as I have said, genuinely moved and amazed by all the stories of exceptional animal kindness that Rowlands recounts. That, I think, is, partly at least, the effect Rowlands wants to have on his reader. He wants us to see animal behaviour in a new light: not simply "nature red in tooth and claw", but a place where the most wondrous acts of goodness can take place too. However, calling them acts of animal morality is more likely to lead the reader to see problems with his interpretation (as I have just done) than to contemplate the kindness he makes us see.

What alternatives are there? We sometimes call exceptionally good deeds beautiful. By exceptional, I mean actions that surpass moral expectations. Saintly deeds are typically beautiful in this sense. Perhaps we could frame Binti Jua's gentle concern for the poor boy this way? Not modelling it on a conception of morality, but on something that surpasses moral demands? Describing animal kindness as beautiful rather than moral would do justice to the kindness we see, without disregarding our surprise at such kindness. -- This surprise, I believe, does tell us something about the possibility of animal morality. At the outset of this post, I wrote that "the unexpected" happened when Binti Jua took care of the boy the way she did. Had she been a human being, we wouldn't have thought much of it. That would have been the only appropriate thing to do, after all. But when a gorilla does it...! Binti Jua's gentleness took people by surprise. Doesn't this difference in expectations tell us something about humans and animals?

Tuesday, 23 October 2012

Plastic and Synthetic Food.

For several evenings in a row I have been reading Loraxen to Sigrid. She is very fond of Dr. Seuss's other books, so it doesn't surprise me that she likes this one too. The other day we decided to watch the film.


Lorax -- Skogens Vokter is no great film, but it's OK -- not least, it has good intentions and takes up important themes. The film is only loosely based on the book. The story of how the entrepreneur Selv-nok (the Once-ler) defied the Lorax's warnings and ravaged nature for the sake of money now forms just part of the backdrop. The action has, so to speak, moved out into the frame story. I am ambivalent about this move. On the one hand, Dr. Seuss's merciless mockery of capitalist greed, industrial wastefulness and the senseless chase for more, more, more disappears. On the other hand, a faithful adaptation of the book would hardly have made as good a film. The filmmakers have tried to carry over the book's seriousness, but fall well short of its elegance. The new plot is not particularly inventive. The story of twelve-year-old Ted, who tries to impress the girl of his dreams by taking up her pet cause (in this case, the environment) and lands in trouble, is one we have seen countless times before. On the other hand, perhaps it is a plus that the film, unlike the book, has humans we can identify with in the leading roles? Though "humans" may be a stretch... One thing struck me while watching the film. Lorax -- Skogens Vokter conjures up a veritable horror vision of an artificial world where everything is cheap junk made of plastic and electronics. At the same time, the film is entirely computer-animated. Isn't that ironic?

I realise that my recommendation seems rather guarded. Everything is "on the one hand and on the other". I can't make up my mind. The film was by no means bad, but not top-notch either. I have tried to give it a dice rating, but the die never lands on a number I can put my name to. Perhaps you should listen to my four-year-old instead? Sigrid's verdict was crystal clear. She thought the film was "really great".

Thursday, 4 October 2012

Talking (about) robots.

When I try to imagine a talking robot, the question “why listen?” always seems to pop up. Not because I doubt that lending a robot my ear may provide useful information in some cases. Nor because listening to a robot with nothing but factual information to offer would soon become boring, for I can easily imagine a robot passing the Turing test, which, essentially, is a test of a robot's conversational skills. If a robot can fool a human interlocutor into believing that he is talking to another human being, then this robot is said to possess artificial intelligence. Listening to a well-spoken robot like this could certainly be entertaining for a while -- I am not denying that either -- but entertainment isn't always what we look for in a conversation. What I question, as I have said before (here and here), is whether an automaton, no matter how sophisticated, could ever tell us anything. In what sense can a robot ever be said to "know" something? Can a robot ever carry a burden or a secret it just needs to get off its chest, or wouldn't reveal even if its life depended on it? Can a robot ever be said to express anything? A sentiment, an opinion -- anything -- that could move us or provoke us to react, with anger or pity, say? Could an automaton ever be called wise, or deep, or profound, or perhaps shallow, boring, childish, stubborn or pig-headed?

These are some of the questions underlying my broader “why listen?” question. Perhaps I could rephrase my concerns with much AI philosophy and the Turing test thus: Would a positive result on a Turing test prove that robots are becoming more human-like? The answer, obviously, depends on what we mean by “human-like”. Such a result would certainly prove that robots could be epistemologically hard to distinguish from human beings in certain respects and in certain contexts. But that conclusion is rather trivial, philosophically speaking. My impression is that most AI engineers are philosophically more ambitious on behalf of their field. The driving force behind much of the research, it seems to me, is the dream of creating robots so sophisticated that we will feel forced to relate to them in a human-like fashion too.

As computer and robot technology evolves, automatons are getting better and better at mimicking human behaviour, and some day this may result in an entirely new situation: we may become quite unable to tell the difference any more. What then? This question underlies much AI philosophy.

It may be true that technology is pushing in that direction. Some day robots may be so sophisticated that even the hardest Turing test wouldn't expose them. I am not questioning this (though I do find some of the wilder scenarios...well...wild, and don't think this is likely to become a reality any day soon). My objection, rather, is that even if it should become impossible for us to tell the difference between a human-to-human and a human-to-robot conversation, it is far from evident that we should also stop seeing a difference here. That's obscure, so let me put it like this: empirically speaking, we may be unable to tell whether we are talking to an automaton or another human being, but that doesn't mean we would altogether stop distinguishing between, say, artificial and genuine conversations. That may happen -- it is a possible scenario -- but it doesn't automatically follow. The distinction between the false and the real, the artificial and the genuine, would undoubtedly play a very different role in people's lives under such circumstances, but it isn't self-evident that it would play no role at all.

That is the impression I sometimes get from reading arguments about the Turing test. The test is supposed to settle how we should view human-computer interaction: as long as we can see the difference, there really is an important difference here; but should the differences become invisible to us, many of the distinctions we make between man and machine would simply evaporate. If a computer can fool us into treating it as an intelligent creature because we think it is an intelligent creature, then this truly proves computer intelligence, and we should perhaps stop talking about having been fooled altogether, and instead accept the computer as an intelligent interlocutor and regard our conversation with it as a genuine exchange of ideas. But this seems confused to me. The question cannot be settled by the Turing test or any other empirical test. Whether computers are becoming more human-like in this sense isn't an empirical question at all. How we relate to automatons is largely a normative question, not simply a question about computer sophistication. If sophistication alone settled the matter, a positive result on a Turing test would oblige us to engage with the computer in similar ways next time too, even though we now know that it is (only) a computer. That sounds like an odd obligation to me.

Let us imagine a Turing test situation turning into an argument. Say I present Ingmar Bergman as my all-time favourite movie director, whereas my interlocutor prefers Woody Allen. Under normal circumstances I wouldn't hesitate to call this a disagreement. But if my interlocutor were a computer, I would find that description...awkward, if nothing else. I am not saying that I would have a feeling of awkwardness while arguing (this computer is too sophisticated to give itself away like that). What I am saying, though, is that the idea of disagreeing or arguing with a computer sounds strange to me, the complexity of the computer notwithstanding. (If you prefer Allen to Bergman, I may disagree with you. Perhaps it would be hard for me to let go; I may brood on our quarrel for days on end, think about arguments I used or failed to use, bring the issue up again next time we meet, and so on; but could I disagree with a computer in any way resembling this?) Imagine that you were looking over my shoulder, knowing everything about my interlocutor. Perhaps you would describe my behaviour as “disagreeing or arguing with a computer,” but wouldn't you also find my anger and frustration somewhat amusing and misplaced? I think I would if I were in your shoes -- or perhaps I would feel a little sorry for this person struggling to talk sense to a computer. And when the truth was finally revealed to me, wouldn't I feel a little foolish? I would perhaps congratulate the manufacturers of the computer on their great accomplishment, but the feeling of having been fooled would hardly go away. In one sense I think it shouldn't go away, either. What sense is that? In the sense that I would be making a fool of myself were I to continue the argument knowing my interlocutor to be a computer.