Is the Turing Test really all that useful or important? I can easily imagine an AI powerful beyond any human intelligence that would still completely fail a few minutes of conversation with an expert.
There is so much about the human experience that is very particular to humans. Is creating an AI with a deep understanding of what certain subjective feelings are like, or of the niceties of social interaction, really necessary? Yes, an FAI eventually needs to have complete knowledge of those, but the intermediate steps may be quite alien and mechanical, even if intelligent.
Spending a lot of time trying to fool humans into thinking that a machine can empathize with them seems almost counterproductive. I’d rather AIs honestly relate what they are experiencing than pretend to be human.
The test is a response to the Problem Of Other Minds.
Simply put, no other test will be accepted by people as showing that [insert something non human here] is genuinely intelligent.
The reasoning goes: strictly speaking, the problem of other minds applies to other humans as well, but we politely assume that the humans we’re talking to are genuinely intelligent, or at least conscious, on little more than the basis that we’re talking to them and they’re talking back like conscious human beings.
And the longer and more involved the test, the harder it is to use tricks to fake genuine intelligence.
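Concretely, the imitation game is just a protocol, so the “longer test” point can be made mechanical. Here is a minimal sketch in Python, assuming hypothetical judge_ask, judge_guess, human_reply, and machine_reply callables (none of these names come from Turing’s paper; they simply stand in for the participants):

```python
import random

def run_imitation_game(judge_ask, judge_guess, human_reply, machine_reply, rounds=10):
    """One session of the imitation game: a judge questions two
    anonymous participants, then guesses which one is the machine."""
    # Randomly assign the participants to anonymous labels A and B.
    replies = [human_reply, machine_reply]
    random.shuffle(replies)
    participants = dict(zip(("A", "B"), replies))
    machine_label = "A" if participants["A"] is machine_reply else "B"

    transcripts = {"A": [], "B": []}
    for _ in range(rounds):  # more rounds = more chances for the judge to probe
        for label in ("A", "B"):
            question = judge_ask(label, transcripts[label])
            answer = participants[label](transcripts[label] + [question])
            transcripts[label] += [question, answer]

    # The judge sees only the transcripts, never the assignment of labels.
    guess = judge_guess(transcripts)
    return guess == machine_label  # True means the machine was caught
```

Nothing in this sketch measures intelligence directly; the point is just that as rounds grows, shallow canned tricks become increasingly hard to sustain against a probing judge.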
Honestly, when I read the original essay, I didn’t see it as being intended as a test at all—more as an honorable and informative intuition pump or thought experiment.
It did seem like a useful tool for measuring (some types of) intelligence. Since it doesn’t work, it would be useful to have a substitute...
In other words, agreed: it reads more as a thought experiment than as a workable test.