I don’t know why they’re calling this the “first time”
In 1972, a bot was able to convince trained professionals that it was a human paranoid schizophrenic:
Kenneth Colby created PARRY in 1972, a program described as “ELIZA with attitude”.[28] It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. In order to validate the work, PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the “patients” were human and which were computer programs.[29] The psychiatrists were able to make the correct identification only 48 percent of the time — a figure consistent with random guessing.[30]
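For context, the ELIZA-style approach PARRY extended is essentially keyword-driven pattern matching with canned replies. A minimal illustrative sketch in Python is below; the rules and keywords here are invented for illustration and are not PARRY's actual model, which layered a simulation of paranoid affect on top of this kind of matching:

```python
import random
import re

# A minimal ELIZA-style responder: scan the input for keywords and
# fire a canned response. These rules are purely illustrative, not
# PARRY's actual rule set.
RULES = [
    (re.compile(r"\bwhy\b", re.I),
     ["Why do you want to know?", "That is my business."]),
    (re.compile(r"\byou\b", re.I),
     ["We were discussing me, not you.", "Don't change the subject."]),
    (re.compile(r"\b(police|cops?)\b", re.I),
     ["The police can't be trusted.", "They are watching me, you know."]),
]
DEFAULT = ["Go on.", "I don't want to talk about that."]

def respond(utterance: str) -> str:
    """Return a response for the first matching rule, else a default."""
    for pattern, replies in RULES:
        if pattern.search(utterance):
            return random.choice(replies)
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(respond("Why are you here?"))
    print(respond("Have you spoken to the police?"))
```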
A foreign 13-year-old who isn’t being challenged is a low bar to pass.
A bot which posts below YouTube videos and does nothing but spew racial abuse and “lol” would be indistinguishable from the 13-year-old humans doing the same thing, so it would technically pass the Turing test.
I’ll be much more interested when it can convince a group of professionals that it’s another professional in their field; that would be much more useful.
I used to play a MUD that had a chatbot on it for months in the late 1990s before the people running the game found out and kicked “him” off for violation of the no-bots rule. The chatbot used one specific group chat line and acted somewhat like the hypothetical video poster—mild verbal insults that weren’t quite nasty enough to justify complaining to admin about, potty humor, “shut up [name]” and similar responses to questions, and other behaviors that were believably how a middle-school-aged player with trollish intentions might act.
Lowering the standard of the chatbot’s expected conversational level by giving it the persona of a child or early adolescent speaking in a language other than his/her first does seem like a form of cheating while following the letter of the rules. At a minimum, I’d like to see the chatbot pass as an ordinary adult of at least average intelligence who is a native speaker of the language the test is conducted in. A fellow professional in a given field would be even better.