Consider first of all a machine that can pass an “AI-focused Turing test”, by which I mean convincing one of the AI team that built it that it’s a human being with a comparable level of AI expertise.
I suggest that such a machine is almost certainly “sufficient unto FOOM”, if the judge in the test is allowed to go into enough detail.
An ordinary Turing test doesn’t require the machine to imitate an AI expert, merely a human being. So for a “merely” Turing-passing AI not to be “sufficient unto FOOM” (at least as I understand that term), what’s needed is a big gap between making a machine that successfully imitates an ordinary human being and making a machine that successfully imitates an AI expert.
It seems unlikely that there’s a very big gap architecturally between human AI experts and ordinary humans. So, to get a machine that passes an ordinary Turing test but isn’t close to being FOOM-ready, it seems like what’s needed is a way of passing an ordinary Turing test that works very differently from actual human thinking, and doesn’t “scale up” to harder problems like the ordinary human architecture apparently does.
Given that some machines have been quite successful in stupidly-crippled pseudo-Turing tests like the Loebner contest, I suppose this can’t be entirely ruled out, but it feels much harder to believe in than a “narrow” chess-playing AI was, even at the time of Hofstadter’s prediction.
Still, I think there might be room for the following definition: the strong Turing test consists of having your machine grilled by several judges, with different domains of expertise, each of whom gets to specify in broad terms (ahead of time) what sort of human being the machine is supposed to imitate. So then the machine might need to be able to convince competent physicists that it’s a physicist, competent literary critics that it’s a novelist, civil rights activists that it’s a black person who’s suffered from racial discrimination, etc.