Nowadays self-regulating air-conditioning systems are commonplace, and few people feel inclined to call them intelligent. But when such systems were first introduced to the market, people were amazed by them, thinking they could now have intelligent, thinking homes. Of course, these systems would never pass the Turing test: they never fooled anyone into mistaking them for people. But sixty years ago no one could imagine a computer as sophisticated as Siri, so if someone from the 1950s were to judge, we have indeed already passed that test.
This argument, and these examples, don't settle much, I agree, but they do, I think, point at something worth pointing at. As I read it, the gist of the argument isn't "adding the requirement that we try to think as if we were in the 1950s," as Richter wrote, but rather that what we feel like calling artificial intelligence changes with time, and with the world we live in. This goes to show that the idea of "true" AI isn't as clear as many of its proponents seem to believe.
People treat the question of whether AI is possible as if it were a question of what we can make computers do. But I sense deep confusion here. (In what follows I draw on thoughts by Lars Hertzberg.) There is indeed philosophical disagreement about whether computers can think or not, but this isn't really a disagreement about what technology is possible or what the future might bring; it is a disagreement about what we can say about technology whose actual or possible existence no one doubts. "Can this computer think or can it not?" The answer to this question does not depend on what the computer is able to do, but on what it makes sense for us to say about such performances. If I were fooled by a computer, I would no doubt treat it as I would a thinking agent, at least as long as I was under the impression that I was conversing with a human being. Would I continue to say that my partner was intelligent and thinking if I realised that I was in fact talking to a computer? I might. There are uses of the words "intelligent" and "think" that could still have a function. Maybe I took pleasure in trying to trick the computer into revealing itself, but was impressed by the complexity of the software: "Wow, it is really intelligent!" Or I could say: "When I write..., it answers back...", or: "Look, now I'll make it believe that...". On the other hand, there are uses of the word "think" that would hardly be comprehensible anymore: "What are you thinking, stupid!", "Where are your thoughts today!", "Now, don't rush it. Take your time and think it through before you answer", or: "She's a real thinker, this one".
Comparing different cases, we see why in some circumstances we would regard a machine as thinking (and so on), whereas in others this would be utterly pointless. Summarising this discussion, we may be tempted to draw different conclusions. One person may be so impressed by all the ways a computer can be regarded as thinking that only the words "the computer is thinking" do justice to his amazement. Another may be less impressed, and summarise the discussion by saying that "in the end the computer isn't really thinking". This is a question of verbal preferences. Now, that might seem like a meagre conclusion. Still, it doesn't make the whole AI discussion futile and senseless. It can help us shed light on questions such as what it is to be an individual, or what it is for someone to have something to say.
This is my take on the philosophy of AI. I am often flabbergasted by the enormous progress made in computer technology, but the philosophically interesting questions lie elsewhere, I think. The big question, to my mind, isn't what the future may bring in computer software. Some day it may be so good that no one can tell the difference between the computer and a human being. Computers may be able to carry out all kinds of complicated tasks and conversations. This will certainly be impressive. And embodied in a robot, such software could easily find many useful applications. We could all have mechanical housemaids, for instance, or our own personal shoppers or hardworking secretaries. Perhaps they could be put to some very important tasks as well. But could a machine ever substitute for a human being in a genuine human interchange? Could a robot replace a therapist in psychotherapy, say, or a friend when we need company? Could computers be our friends? Our confidants? I'm not sure that the answers are all negative (the Swedish TV series Äkta människor certainly suggested otherwise), but I am quite sure that normative questions like these, not empirical questions about technology, are at the centre of it all. If computers were to pass even the hardest conceivable version of the Turing test, and were able to engage us in sophisticated conversations for hours, there would still remain the question of why we should care to listen. What on earth could a machine possibly have to tell us? I think Rush Rhees said something like this, somewhere.