Once in a while in online chat, real people behave like chatterbots. Granted, with some probing it is possible to tell the difference, provided the suspect stays connected long enough, but there have been cases where I was not quite sure. In one case the person (I’m fairly sure it was a person) posted links to their own site and generic sentences like “there are many resources on ”. In another case the person posted online news snippets and reacted with hostility to any attempt to engage, which is a perfect way to discourage Turing probing if you are designing a bot.
Imagine a normal test, perhaps a math test in a classroom. Someone knows math but falls asleep and doesn’t answer any of the questions. As a result, they fail the test. Would you say that the test can’t detect whether someone can do math?
Technically, that’s correct. The test doesn’t detect whether someone can do math; it detects whether they are doing math at the time. But it would be stupid to say “hey, you told me this tests if someone can do math! It doesn’t do that at all!” The fact that they used the words “can do” rather than “are doing at the time” is just an example of how human beings don’t use language like a machine, and objecting to that part is pointless.
Likewise, the fact that someone who acts like a chatterbot is flagged as a computer by the Turing test does, technically, mean that the test doesn’t detect whether someone is a computer; it detects whether they are acting computerish at the time. But “this test detects whether someone is a computer” is how most people would normally describe it, even if that’s not technically accurate. It’s pointless to object on those grounds that the test doesn’t detect whether someone is a computer.
It doesn’t even test whether someone’s doing math at the time. I could be spending the whole exam doing all kinds of math, just not the problems on the paper, and fail as a consequence.
I would say, rather, that tests generally have implicit preconditions in order for interpretations of their results to be valid.
Standing on a scale is a test for my weight that presumes various things: that I’m not carrying heavy stuff, that I’m not being pulled away from the scale by a significant force, etc. If those presumptions are false and I interpret the scale readings normally, I’ll misjudge my weight. (Similarly, if I instead interpret the scale as a test of my mass, I’m assuming a 1g gravitational field, etc.)
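The weight/mass distinction can be made concrete. Here is a minimal sketch (the function name and the Moon scenario are my own illustration, not from the discussion above): a bathroom scale measures force, and interpreting its reading as mass silently assumes a 1 g field. Violate that precondition and the naive interpretation goes badly wrong.

```python
# A scale calibrated in kg actually measures force and divides by
# standard Earth gravity. Interpreting its reading as mass therefore
# presumes the local field is 1 g.
G_EARTH = 9.80665  # standard gravity, m/s^2

def mass_from_scale(reading_kg: float, local_g: float = G_EARTH) -> float:
    """Recover true mass from a scale reading, given the local gravity.

    The scale reports force / G_EARTH, so the force it felt is
    reading_kg * G_EARTH, and true mass is that force / local_g.
    """
    force_newtons = reading_kg * G_EARTH
    return force_newtons / local_g

# On Earth the precondition holds, so reading == mass:
#   mass_from_scale(70.0)  -> 70.0
# On the Moon (g ~ 1.625 m/s^2) a 70 kg person reads only ~11.6 kg;
# taking that raw reading as mass misjudges it by a factor of ~6,
# while correcting for local_g recovers the true value:
#   mass_from_scale(11.6, local_g=1.625)  -> ~70
```

The point is the same as with the math exam: the instrument is fine; it is the *interpretation* of its output that carries the hidden assumptions.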
Taking a math test in a classroom makes assumptions about my cognitive state—that I’m awake, trying to pass the exam, can understand the instructions, don’t have a gerbil in my pants, and so forth.