Turing Tests

The classic Turing test involves a robot that passes for a conscious thinking human being. The examiner spends time with the robot, asking questions, interacting, and the question is whether it presents a convincing enough appearance of intelligence and consciousness. It is like an audition for playing the part of a normal human being. Structurally, however, the Turing test exemplifies something more general, and it is instructive to spell out what it is.

Consider the Turing* test: can we construct a virtual world that passes for a real world? An engineer is making a machine that will feed inputs into the brain and produce an impression of a world of ordinary material objects; the question is whether this virtual world can convince a tester that it is real. The tester can experiment on this virtual world, moving around, varying the angles, using different senses, and if after some suitable time he cannot distinguish the virtual from the real, we can declare that the machine passes the Turing* test. It can produce a convincing simulacrum of a real world—as a robotics engineer might produce a convincing simulacrum of a conscious intelligence.

We could also envisage a Turing** test that concerns producing artificial plant life: can we make an object that resembles a naturally occurring plant closely enough to convince someone that it is really a biological plant? And we can have subdivisions of such questions: can we artificially simulate a virus, a bat, a cactus, or an octopus? The question is not specific to robots and minds at all: it is about the power to mimic naturally occurring objects by artificial contrivance. Can we make an artificial F, for arbitrary F?

Here is an interesting question of the general type—call it the super-Turing test: Can we create a virtual world that contains robots that pass the classic Turing test? That is, we first have to create a virtual world of bodies, as in the Turing* test, and then we have to ensure that those virtual bodies behave in ways that perfectly mimic human bodies—so that they will pass the Turing test. Thus virtual robots may pass the super-Turing test, and hence be declared by testers to be conscious thinking beings. The tester has been fooled into believing he is surrounded by conscious thinking beings when he is really living in a virtual world of imaginary robots.

Suppose the virtual robots do pass the super-Turing test: are they then really conscious? But how can a merely virtual being be conscious, as opposed to seeming so? Are the people in your dreams conscious? Clearly not—though they pass a kind of super-Turing test. Passing the Turing test is not logically sufficient to qualify as conscious, because the virtual robots that pass the super-Turing test thereby pass the Turing test without being conscious. Passing the test is enough to convince someone that there is a real thing of the type in question, if they don't know the actual nature of the thing; but that is a question about evidence and belief, not about what is metaphysically possible. Anything can pass a Turing-type test for being an F but still not be an F, since appearing to be an F is never logically sufficient for being an F. In other words, there is always skepticism to reckon with.

 
