Showing posts with the label empiri. Show all posts

Thursday, October 4, 2012

Talking (about) robots.

Whenever I try to imagine a talking robot, the question “why listen?” seems to pop up. Not because I doubt that lending a robot my ear may provide useful information in some cases. Nor because I think that listening to a robot with nothing but factual information to offer would soon become boring, for I can easily imagine a robot passing the Turing test, which, essentially, is a way of testing the robot’s conversation skills. If a robot can fool a human interlocutor into believing that he is talking to another human being, then this robot is said to possess artificial intelligence. Listening to a well-spoken robot like this could certainly be entertaining for a while, I am not denying that either, but entertainment isn’t always what we look for in a conversation. What I question, as I have said before (here and here), is whether an automaton, no matter how sophisticated, could ever tell us anything. In what sense can a robot ever be said to "know" something? Can a robot ever carry a burden, or a secret it just needs to get off its chest, or one it wouldn't reveal even if its life depended on it? Can a robot ever be said to express anything? A sentiment, an opinion -- anything -- that could move us or provoke us to react, with anger or pity, say? Could an automaton ever be called wise, or deep, or profound, or perhaps shallow, boring, childish, stubborn or pig-headed?

These are some of the questions underlying my broader “why listen?”-question. Perhaps I could rephrase my concerns with much AI philosophy and the Turing test thus: Would a positive result on a Turing test prove that robots are becoming more human-like? The answer, obviously, depends on what we mean by “human-like”. Such a result would certainly prove that robots could be epistemologically hard to distinguish from human beings in certain respects and in certain contexts. But that conclusion is rather trivial, philosophically speaking. My impression is that most AI engineers are philosophically more ambitious on behalf of their field. The driving force behind much of the research, it seems to me, is the dream of creating robots so sophisticated that we will feel forced to relate to them in a human-like fashion too.

As computer and robot technology evolves, automatons are getting better and better at mimicking human behaviour, and some day this may result in an entirely new situation: Some day we may be quite unable to tell the difference any more. What then? This question underlies much AI philosophy.

It may be true that technology is pushing in that direction. Some day robots may be so sophisticated that even the hardest Turing test wouldn’t unveil them. I am not questioning this (though I do find some of the wilder scenarios...well...wild, and don't think this is likely to become a reality any day soon). My objection, rather, is that even if it should become impossible for us to tell the difference between a human-to-human and a human-to-robot conversation, it is far from evident that we also should stop seeing a difference here. That’s obscure; so let me put it like this: Empirically speaking we may be unable to tell whether we are talking to an automaton or another human being, but that doesn’t mean we would altogether stop distinguishing between, say, artificial and genuine conversations. That may happen -- it is a possible scenario -- but it doesn’t automatically follow. The distinction between the false and the real, the artificial and the genuine would undoubtedly play a very different role in people’s lives under such circumstances, but it isn’t self-evident that it would play no role at all.

That, at least, is the impression I sometimes get from reading arguments about the Turing test. The test is supposed to settle how we should view human-computer interaction. As long as we can see the difference, there really is an important difference here; but should the differences become invisible to us, then many of the distinctions we make between man and machine would simply evaporate. If a computer can fool us into treating it as an intelligent creature because we think it is an intelligent creature, then this truly proves computer intelligence, and we should perhaps stop talking about having been fooled altogether, and instead accept the computer as an intelligent interlocutor and regard our conversation with it as a genuine exchange of ideas. But this seems confused to me. The question cannot be settled by the Turing test or any other empirical test. Whether computers are becoming more human-like in this sense isn't an empirical question at all. How we relate to automatons is largely a normative question, not simply a question about computer sophistication. If it were simply a question of sophistication, a positive result on a Turing test would oblige us to engage with the computer in similar manners next time too, even though we now know that it is (only) a computer. That sounds like an odd obligation to me.

Let us imagine a Turing test situation turning into an argument. Say I present Ingmar Bergman as my all-time favourite movie director, whereas my interlocutor prefers Woody Allen. Under normal circumstances I wouldn’t hesitate to call this a disagreement. But if my interlocutor were a computer I would find that description...awkward, if nothing else. I am not saying that I would have a feeling of awkwardness while arguing (this computer is too sophisticated to give itself away like that). What I am saying, though, is that the idea of disagreeing or arguing with a computer sounds strange to me, the complexity of the computer notwithstanding. (If you prefer Allen to Bergman, I may disagree with you. Perhaps it would be hard for me to let go; I may brood on our quarrel for days on end; think about arguments I used or failed to use; I may bring the issue up again next time we meet, and so on; but could I disagree with a computer in any way resembling this?) Imagine that you were looking over my shoulder, knowing everything about my interlocutor. Perhaps you would describe my behaviour as “disagreeing or arguing with a computer,” but wouldn’t you also describe my anger and frustration as somewhat amusing and misplaced? I think I would if I were in your shoes -- or perhaps I would feel a little sorry for this person struggling to talk sense to a computer. And when the truth was finally revealed to me, wouldn’t I feel a little foolish? I would perhaps congratulate the manufacturers of the computer on their great accomplishment, but the feeling of having been fooled would hardly go away. In one sense I think it shouldn't go away, either. What sense is that? In the sense that I would be making a fool of myself were I to continue the argument knowing my interlocutor to be a computer.

Sunday, November 14, 2010

Making God's existence reasonable.

Over the past week I have had a philosophical exchange with Professor Jean Kazez on her blog In Living Color. The question is whether there are conceivable empirical events that should lead all reasonable people (reasonable men) to think that there must exist a spirit with at least some of God's properties.

Suppose this morning you found out that every token of "Woody Allen" in every book and magazine, worldwide, had turned green. This pattern of events is physically inexplicable (too spread out, too fast, to be physically explained), but coherent, meaning-wise. In other words, it's the sort of thing a Someone might wish for (if they happened to have a thing about Woody Allen).
If that actually happened, it would be reasonable to think a Mind must have willed it, thereby making it so. An immaterial mind that makes things happen through sheer willing is...maybe God.

That is the thought experiment. And the conclusion may seem obvious. Coming up with alternative explanations is by no means easy. But I put a couple of questions to Kazez. First: we are inclined to read the reports as describing one event; but is that obvious? Perhaps what we have is a series of unrelated events. Perhaps we need not one explanation but many. That so many improbable events fell on the same night was perhaps just a gigantic coincidence. This possibly weakens our first impulse...

But suppose it were established (how, I have no idea) that the events were related, as Kazez's intention seemed to be -- what then? Should the conclusion then be that a spirit was behind them? That was what our exchange was about. (The rest of the discussion concentrated on other questions.) I put to Kazez what I consider an important question: how does her story continue?

I do not deny that spirits (may) exist; nor even that this might be the explanation in this case. But the question is whether such reports make such a conclusion reasonable. Whether a single empirical event, however fantastically extensive and intricate, can make reasonable the claim that "a Mind must have willed it, thereby making it so." Our urge to say something like that may be strong -- perhaps I too would feel it -- but the question is whether it would be reasonable to give in to the urge. I doubt it. I doubt whether the distinction between reasonable and unreasonable can be settled on the basis of the story Kazez tells. It depends on how the story continues.

If mysterious events kept happening, that might make the conclusion reasonable. Perhaps at some point we would find it unreasonable to go on looking for other explanations. Kazez saw the point that the single event was not much to go on, but still held that the events were in themselves so extraordinary that they made the conclusion reasonable. But what if this one mysterious episode were all there was? After these green Woody Allens, nothing happens. Does the conclusion seem as strong then?

Here it may help to view the events from a distance. In the original thought experiment we are asked to imagine hearing that something fantastic has happened during the night. But what would people think of these green Woody Allens a hundred or two hundred years from now, if nothing like this ever happened again? They would hardly share our strong urge to say that a spirit must have been behind it. Would they nevertheless have reasonable grounds to believe that such a spirit exists? Or at least that it must have existed that one time, all those years ago? Would they regard the reports as evidence, or merely as anecdotes -- much as we regard the reported miracles of the Middle Ages?

We obviously cannot know the answer. But these different continuations of the story seem capable of weakening the urge to draw Kazez's conclusion in this case, and more generally the urge to draw metaphysical-theological conclusions from empirical events, let alone from a single empirical event.