I don’t have a way to set up a proper Turing test, obviously. I’m just saying that these responses are not what a human would say.
GPT-4 might pass it under the strict definition, on the current trajectory, but I’m afraid it might be too late by that point.
GPT-4 will also not pass a properly-run Turing test, and this is also obvious. I view properly passing the Turing test to be a harder task than killing everyone and taking over the world. If the AI doomers are right (and they might be), then I expect to never see an AI that passes the Turing test.
Which is why it is weird and annoying when people say current LLMs pass it.
GPT-4 will also not pass a properly-run Turing test, and this is also obvious.
Well, if you say so.
The purpose of the Turing test was not to revel in human testers’ ability to still distinguish between the AI and the human respondent (you seem to take pride in the fact that you would not have been fooled even if you hadn’t known Charlotte was an AI. Great, you can pat yourself on the back, but that is not the purpose of the test; this is not a football match). It was to measure how close the AI is getting to human-level cognitive abilities on the conversational side of things, and thereby to gauge how near we are to the events the “AI doomers” are preaching about. In that sense, the mere increase in the difficulty of reliably conducting Turing tests would inform us of the rate of progress, and it’s undeniable that the models are getting exponentially better, regardless of whether you think they will eventually pass the test 100% of the time, in all conditions, given unlimited test time with human testers as sophisticated as yourself.