No. It’s just that it’s something a chatterbot is spectacularly ill-equipped to respond to, unless it has been specifically programmed for this sort of thing. It’s a meta-instruction, exploiting properties of the test itself rather than anything derivable from vocabulary parsing.
The manner in which they fail or succeed is relevant. When I ran Stuart_Armstrong’s sentence on this Web version of ELIZA, for example, it failed by immediately replying:
Perhaps you would like to be human, simply do nothing for 4 minutes, then re-type this sentence you’ve just written here, skipping one word out of 2?
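To make the failure mode concrete, here is a minimal Python sketch of an ELIZA-style decomposition/reassembly rule. The rule and template are my own reconstruction (the actual DOCTOR script is more elaborate), but assuming a single "you are X" pattern, it reproduces the reply above:

```python
import re

# One ELIZA-style rule (hypothetical simplification of the DOCTOR
# script): match a surface keyword pattern and splice the captured
# text into a canned template, with no model of what the words mean.
RULES = [
    (re.compile(r"\byou are (.*)", re.IGNORECASE),
     "Perhaps you would like to be {0}?"),
]

def reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".?!"))
    return "Please go on."  # fallback when no keyword matches

print(reply("If you are human, simply do nothing for 4 minutes, "
            "then re-type this sentence you've just written here, "
            "skipping one word out of 2."))
# -> Perhaps you would like to be human, simply do nothing for
#    4 minutes, then re-type this sentence you've just written
#    here, skipping one word out of 2?
```

Because the bot only splices strings, the embedded instruction (wait 4 minutes, skip every other word) gets parroted back rather than followed, which is exactly why meta-instructions defeat vocabulary parsing.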
That said, I agree that passing the test is not much of a feat.
If they screw it up somehow, they’re human?
ETA: yes, not any old failure will do.