Wednesday, 5 September 2012

Opinions on Artificial Personalities.

In a recent blog entry I said, following Lars Hertzberg, that disagreements about whether "real" AI is ever possible are not disagreements about technological possibilities, but disagreements about what we will feel like calling such technologies. These are to a fair extent disagreements about verbal preferences. What words we will prefer when describing unheard-of technologies in the future, no-one can say with any confidence. As none of us knows beforehand what we will feel like saying about a computer that's able to, say, engage us in long and sophisticated conversations -- whether we will call it intelligent or not, or whether we will call it so at one time and later change our minds about it -- asking these kinds of questions is rather pointless. This, however, doesn't render the entire AI field philosophically barren: it could still shed some light on, and make us think harder about, important questions like: What is it to be a person? To have a personality? What does it mean to be an individual? And what does it mean for someone to have something to say?

In this follow-up blog post I’ll elaborate a little on these ideas and try to demonstrate what such an investigation might look like.

Now, when I asked why on earth we should care to listen when or if robots learn to talk, I wasn't denying that this could provide us with useful information. Even today, online search engines do that: if I want the name of the capital of Belarus, Google will instantly give me the right answer. A computer may be knowledgeable in the sense that it would be quite unbeatable in quiz shows, but can a computer reflect on, and have its own opinion about, the things it knows? It was along these lines I was thinking when I questioned whether computers could ever have something to tell us. The answer, I guess, depends on what you think reflection and opinion are. But let me explain why I am hesitant.

Generally we assume several kinds of connections between a person and his opinions. When someone speaks his mind on some issue -- on the roots of the Middle East conflict, say -- we take that as expressing something about that person. I am not primarily thinking of "Oh, so you are one of those, are you?", though that reaction is common enough. No.

1) My thought is, rather, that no opinion can stand alone in a man's thinking. Anyone can say "Israel is to blame", but if this person has no opinion about the international power politics involved in the conflict, doesn't know the first thing about its history, or has no idea of what a reasonable solution might look like, and so on, this person can hardly be said to have the opinion that Israel is to blame. To have an opinion is to have a cluster of them.

2) But that cluster cannot look any old way. A person with this view on the origin of the Israel-Palestine conflict cannot hold just any view of what a solution might look like. If he thinks the Israelis unlawfully seized Palestinian land, we would be excused for thinking him confused if he at the same time thought Israel should be allowed to carry on its rampage. "What do you mean!? You can't have it both ways. Do you think Israel is the perpetrator here, or do you think Israel has a legitimate cause to continue fighting?" Charged with incoherence, a person has only two options: either he must convince us that his line of thinking isn't really incoherent, or he must accept the charge and revise some of his statements so that it all fits together -- at least he must if he wants to make up his mind on the subject. This is sometimes called the principle of reflective equilibrium.

3) This ties in with yet another connection between a person and his thoughts, namely that of accountability. We hold persons accountable for their thinking. If a line of thought is confused, incoherent, wrongheaded or messy, we think of it as his mess. I cannot clean it up for him. I can of course try to show him how, but in the end this is something he has to fix himself. (The same, of course, is true of a brilliant argument: it's his brilliance.) If someone utters an opinion at which we take offence, it isn't simply the opinion, but the person whose opinion it is, that offends us. This might be vaguely put, but the point is simply that it is to him we take our complaints.

4) Finally, there are of course the famous "oh, so you're one of those" reactions. A brave action may change my whole perception of you. "He is one of the brave ones, the ones who don't panic, but whose nature it is to risk their own lives for the sake of others." The underlying thought (if it is indeed so much as a thought) is that our lives reflect who we are as persons. This is why actions, value judgements and opinions can forge and break apart friendships, for instance. If your friend says something deeply racist, I can easily understand why you would refuse to spend any more time with that person. But if our opinions on such matters said no more about us than, say, our mood (which changes all the time) or the colour of our hair, such reactions would be rash, if not entirely senseless. (Few think of them as such.)

It is normative connections like these, between persons and personalities, on the one hand, and opinions and attitudes, on the other, that make me question whether computers can ever be said to take a stance and have their own views on things. As I said at the outset (and explained in that earlier post of mine), this is not me being pessimistic about how sophisticated computer technology is likely to become. Computers and robots may well be able to perform all kinds of complicated tasks in the future. "Will computers ever be able to pass the Turing test? Can robots ever replace dogs as assistants for blind people in traffic? Will robots some day do the dangerous work of fire-fighters?" Time will tell. These are empirical questions that may well be answered in the future. But the question "Will computers ever be intelligent and thinking creatures?" is of a different sort. Whether true AI is possible cannot be settled by testing what computers are able to do, as the question is whether or not we think "(artificial) intelligence" is a fitting description of such accomplishments.

How about artificial personality? The question, again, is not whether robots can be programmed with character traits of a sort -- that seems trivially possible. The Swedish TV series Äkta människor (Real Humans) envisions a future where people go to the supermarket and choose from a large selection of different types of Hubots (HUman/roBOT). As I remember it, the shopkeepers in the series used to call this "(artificial) personalities", as in: "What 'personality' would you prefer?" This seemed O.K. to me. But would it make sense to treat these Hubots as personalities (without the scare quotes) in the largely normative sense I have sketched out above? What would this entail? Consider the following questions.

What would it, for instance, be to demand that a Hubot aspire to achieve reflective equilibrium? "Why, the demand would be exactly the same as for people, of course!" Does this imply that it would also reflect badly on the Hubot if it failed to do so? That we should think of it as a failure, and that we, perhaps, ought to turn our backs on this Hubot? A Hubot rescues a baby from a burning house. Is this courage? A Hubot fails to speak up against terrible injustice. Does this display cowardice or lack of backbone? A Hubot comments unfavourably on the colour of your skin. Is this racism? Could this remark jeopardise your entire relationship with the machine, as signs of moral corruption can ruin a human friendship?

I can understand that someone would be frustrated or angered if their Hubot started acting up like this (I vividly remember the infuriating syntax errors of my childhood), but what if they interpreted this "racist remark" as a sign of hostility from the Hubot, as the Hubot somehow having let them down; or as the Hubot having taken a wrong turn in life, sending it down the path towards moral corruption? Would I understand that? I don't know. Even in the TV show this wasn't obvious. Views about the Hubots were divided, to say the least. Some regarded them as little more than sophisticated machines. Others hated them like vermin, and expressed this attitude in a language echoing that of the Ku Klux Klan, indicating that the Hubots were like them in some important sense. Still others grew emotionally attached to their Hubots -- not in the sense in which people can grow attached to a certain pair of shoes, say, but something bordering on human attachment. Of course I cannot know what I would say if I were in their shoes. Maybe I'm short on imagination, but I must admit that it looks ludicrous to me to be offended -- or, for that matter, flattered or pleased -- by a remark from a robot, however emotionally attached you are to it. To me, this looks like emotions gone wild (like reading a book and being wounded on finding the sentence "You are a moron!"). I imagine that my advice to someone whose Hubot started to act up, calling them names, talking back and so on, would not be to retaliate, or to keep the Hubot at some distance, or to think twice about turning it on next time. More likely, my advice would be to look for the reset button and, if that didn't work, to reinstall the software.

I realise that I haven't made much progress with this post. I have hardly touched on what it is to be an individual, and I believe there's much more to say about what it means for someone to have something to say. I may have to write that second follow-up. A follow-up to this follow-up, that is.
