Whenever I try to imagine a talking robot, the question “why listen?” seems to pop up. Not because I doubt that lending a robot my ear may provide useful information in some cases. Nor because listening to a robot with nothing but factual information to offer will soon become boring, for I can easily imagine a robot passing the Turing test, which, essentially, is a way to test the robot’s conversation skills. If a robot can fool a human interlocutor into believing that he is talking to another human being, then this robot is said to possess artificial intelligence. Listening to a well-spoken robot like this could certainly be entertaining for a while, I am not denying that either, but entertainment isn’t always what we look for in a conversation. What I question, as I have said before (here and here), is whether an automaton, no matter how sophisticated, could ever tell us anything. In what sense can a robot ever be said to “know” something? Can a robot ever carry a burden, or a secret it just needs to get off its chest, or wouldn’t reveal even if its life depended on it? Can a robot ever be said to express anything? A sentiment, an opinion -- anything -- that could move us or provoke us to react, with anger or pity, say? Could an automaton ever be called wise, or deep, or profound, or perhaps shallow, boring, childish, stubborn or pig-headed?
These are some of the questions underlying my broader “why listen?” question. Perhaps I could rephrase my concerns about much AI philosophy and the Turing test thus: Would a positive result on a Turing test prove that robots are becoming more human-like? The answer, obviously, depends on what we mean by “human-like”. Such a result would certainly prove that robots could be epistemologically hard to distinguish from human beings in certain respects and in certain contexts. But that conclusion is rather trivial, philosophically speaking. My impression is that most AI engineers are philosophically more ambitious on behalf of their field. The driving force behind much of the research, it seems to me, is the dream of creating robots so sophisticated that we will feel forced to relate to them in a human-like fashion too.
As computer and robot technology evolves, automatons are getting better and better at mimicking human behaviour, and some day this may result in an entirely new situation: we may be quite unable to tell the difference any more. What then? This question underlies much AI philosophy.
It may be true that technology is pushing in that direction. Some day robots may be so sophisticated that even the hardest Turing test wouldn’t unmask them. I am not questioning this (though I do find some of the wilder scenarios...well...wild, and don’t think this is likely to become a reality any day soon). My objection, rather, is that even if it should become impossible for us to tell the difference between a human-to-human and a human-to-robot conversation, it is far from evident that we should also stop seeing a difference here. That is obscure, so let me put it like this: empirically speaking we may be unable to tell whether we are talking to an automaton or another human being, but that doesn’t mean we would altogether stop distinguishing between, say, artificial and genuine conversations. That may happen -- it is a possible scenario -- but it doesn’t automatically follow. The distinction between the false and the real, the artificial and the genuine, would undoubtedly play a very different role in people’s lives under such circumstances, but it isn’t self-evident that it would play no role at all.
That is the impression I sometimes get from reading arguments about the Turing test. The test is supposed to settle how we should view human-computer interaction. As long as we can see the difference, there really is an important difference here; but should the differences become invisible to us, then many of the distinctions we make between man and machine would simply evaporate. If a computer can fool us into treating it as an intelligent creature because we think it is an intelligent creature, then this truly proves computer intelligence, and we should perhaps stop talking about having been fooled altogether, and instead accept the computer as an intelligent interlocutor and regard our conversation with it as a genuine exchange of ideas. But this seems confused to me. Whether computers are becoming more human-like in this sense isn’t an empirical question at all, and so it cannot be settled by the Turing test or any other empirical test. How we relate to automatons is largely a normative question, not simply a question about computer sophistication. If the test really did settle the matter, a positive result would oblige us to engage with the computer in a similar manner next time too, even though we now know that it is (only) a computer. That sounds like an odd obligation to me.
Let us imagine a Turing test situation turning into an argument. Say I present Ingmar Bergman as my all-time favourite movie director, whereas my interlocutor prefers Woody Allen. Under normal circumstances I wouldn’t hesitate to call this a disagreement. But if my interlocutor were a computer I would find that description...awkward, if nothing else. I am not saying that I would have a feeling of awkwardness while arguing (this computer is too sophisticated to give itself away like this). What I am saying, though, is that the idea of disagreeing or arguing with a computer sounds strange to me, the complexity of the computer notwithstanding. (If you prefer Allen to Bergman, I may disagree with you. Perhaps it would be hard for me to let go; I may brood on our quarrel for days on end; think about arguments I used or failed to use; I may bring the issue up again next time we meet, and so on; but could I disagree with a computer in any way resembling this?) Imagine that you were looking over my shoulder, knowing everything about my interlocutor. Perhaps you would describe my behaviour as “disagreeing or arguing with a computer,” but wouldn’t you also describe my anger and frustration as somewhat amusing and misplaced? I think I would if I were in your shoes -- or perhaps I would feel a little sorry for this person struggling to talk sense into a computer. And when the truth was finally revealed to me, wouldn’t I feel a little foolish? I would perhaps congratulate the manufacturers of the computer on their great accomplishment, but the feeling of having been fooled would hardly go away. In one sense I think it shouldn’t go away, either. What sense is that? In the sense that I would be making a fool of myself were I to continue the argument knowing my interlocutor to be a computer.