I do agree with Turing on this one. What matters is how an agent acts, not what powers its actions. For example, what if I told you that in reality, I don’t speak a word of English? Whom are you going to believe—me, or your lying eyes?
I’m willing to go even farther out on a limb here, and claim that all the serious objections that I’ve seen so far are either incoherent, or presuppose some form of dualism—which is likewise incoherent. They all boil down to saying, “No matter how closely a machine resembles a human, it will never be truly human, because true humans have souls/qualia/consciousness/etc. We have no good way of ever detecting these things or even fully defining what they are, but come on, whom are you gonna believe? Me, or your lying eyes?”
I do agree with Turing on this one. What matters is how an agent acts, not what powers its actions.
What’s the reasoning here? This is the sort of thing that seems plausible in many cases, but the generality of the claim sets off alarm bells. Is it really true that we never care about the source of a behavior over and above the issue of, say, predicting that behavior?
I’m willing to go even farther out on a limb here, and claim that all the serious objections that I’ve seen so far are either incoherent, or presuppose some form of dualism—which is likewise incoherent.
Well, this isn’t quite the issue. No one is objecting to the claim that machines can be people (as, I think, Dennett aptly said, this would be surprising given that people are machines). Indeed, it’s out of our deep interest in that possibility that we made this mistake about Turing tests: I for one would like to be forgiven for being blind to the fact that all the Turing test can tell us is whether or not a certain property (defined entirely in terms of the test) holds of a certain system. I had no antecedent interest in that property, after all. What I wanted to know was ‘is this machine a person’, eagerly/fearfully awaiting the day that the answer is ‘yes!’.
You may be right that my question ‘is this machine a person’ is incoherent in some way. But it’s surprising that the Turing test involves such a serious philosophical claim.
Is it really true that we never care about the source of a behavior over and above the issue of, say, predicting that behavior?
I wasn’t intending to make a claim quite that broad, but now that you mention it, I am going to answer “yes”—because in the process of attempting to predict the behavior, we will inevitably end up building some model of the agent. This is no different from predicting the behavior of, say, rocks.
If I see an object whose behavior is entirely consistent with that of a roughly round rock massing about 1kg, I’m going to go ahead and assume that it’s a round-ish 1kg rock. In reality, this particular rock may be an alien spaceship in disguise, or in fact all rocks could be alien spaceships in disguise, but I’m not going to jump to that conclusion until I have some damn good reasons to do so.
You may be right that my question ‘is this machine a person’ is incoherent in some way. But it’s surprising that the Turing test involves such a serious philosophical claim.
My point is not that the Turing Test is a serious high-caliber philosophical tool, but rather that the question “is this agent a person” is a lot simpler than philosophers make it out to be.
http://www.youtube.com/watch?v=dd0tTl0nxU0
I remember hearing the story of a mathematical paper published in English but written by a Frenchman, containing the footnotes:
1. I am grateful to Professor Littlewood for helping me translate this paper into English.²
2. I am grateful to Professor Littlewood for helping me translate this footnote into English.³
3. I am grateful to Professor Littlewood for helping me translate this footnote into English.
Why was no fourth footnote necessary?
So… the answer is… if I told you I don’t speak any English, you’d believe me? Not sure what your point is here.
Well, I posted the link mostly as a joke, but we can take a serious lesson from it: yes, maybe I would believe you; it would depend. If you told me “I don’t speak English”, but then showed no sign of understanding any questions put to you in English, and never showed any further ability to speak it… then… yeah, I’d lean in the direction of believing you.
Of course if you tell me “I don’t speak English” in the middle of an in-depth philosophical discussion, carried on in English, then no.
But a sufficiently carefully constructed agent could memorize a whole lot of sentences. Anyway, this is getting into GAZP vs. GLUT territory, and that’s being covered elsewhere in the thread.
There are already quite a few comments on this post—do you have a link to the thread in question?
http://lesswrong.com/lw/hgl/the_flawed_turing_test_language_understanding_and/90rl
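Since the GAZP vs. GLUT point comes up a couple of times above, here is a minimal sketch of what a “giant look-up table” conversational agent amounts to. Everything in it (the table contents, the name glut_agent) is my own hypothetical illustration, not anything from the linked thread: the agent “converses” purely by matching the exact conversation so far against memorized replies, and has nothing to say the moment the interrogator steps off the script.

```python
# Minimal sketch of a GLUT-style agent: every reply is looked up from a
# (here, tiny) table keyed on the exact sequence of interrogator inputs so far.
# Table contents and names are hypothetical illustrations only.

MEMORIZED_REPLIES = {
    ("Do you speak English?",):
        "Not a word of it, I'm afraid.",
    ("Do you speak English?", "Then how are you answering me?"):
        "I memorized a sufficiently large number of sentences in advance.",
}

def glut_agent(history):
    """Return the memorized reply for this exact conversation history, if any."""
    return MEMORIZED_REPLIES.get(tuple(history), "(no memorized reply)")

if __name__ == "__main__":
    history = []
    for question in ["Do you speak English?", "Then how are you answering me?"]:
        history.append(question)
        print("Q:", question)
        print("A:", glut_agent(history))

    # Step off the memorized script and the lookup fails, which is what the
    # GAZP vs. GLUT discussion turns on: the behavior looks fine until it doesn't.
    history.append("What did I just ask you?")
    print("A:", glut_agent(history))
```

A “sufficiently carefully constructed” agent of this kind would simply enumerate vastly more conversation histories; the lookup mechanism itself never changes.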