The AI can’t perform human-like motions because it doesn’t have a human-like body, but the test isn’t supposed to penalize it for that. That’s why the test is done through text-only chat and not in person.
Upload a video and have it identify the puppy. You don’t need a body to do that.
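For what it’s worth, a stock narrow vision model can already do this with no body at all. A minimal sketch in Python, assuming OpenCV and torchvision are installed; the file name puppy.mp4 is a made-up stand-in:

```python
# Toy sketch: scan video frames with an off-the-shelf ImageNet classifier
# and report the first frame that confidently looks like a dog.
# Assumes opencv-python and torchvision; "puppy.mp4" is hypothetical.
import cv2
import torch
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()        # resize/crop/normalise preset
labels = weights.meta["categories"]      # the 1000 ImageNet class names

cap = cv2.VideoCapture("puppy.mp4")
found = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 arrays; the model wants an RGB CHW tensor.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = preprocess(torch.from_numpy(rgb).permute(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = int(probs.argmax())
    if 151 <= top <= 268 and probs[top] > 0.5:  # ImageNet classes 151-268 are dog breeds
        found = labels[top]
        break
cap.release()
print(f"dog detected: {found}" if found else "no dog found")
```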
Of course—that’s what I meant. I was responding to your words:
For many actions (e.g. 3d motion), the first is much easier than the second.
And by “3d motion” I thought you meant the way humans can instinctively move their own bodies to throw or catch a ball, but can’t explicitly solve the equations that define its flight.
If a language-optimised AI could control manipulators well enough to catch balls, that would indeed be huge evidence of general intelligence (maybe send them a joystick with a USB port override—the human grasps the joystick, the AI controls it electronically).
Given a 3d world model, predicting the ball’s trajectory and finding an intercept point is very simple for a computer. The challenge is to turn sensory data into a suitable world model. I think there are already narrow AIs which can do this.
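To illustrate how easy the easy half is, here’s a toy sketch in Python, assuming the world model has already handed us a release point and velocity; all numbers are invented and drag is ignored:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2); z points up

def ball_position(p0, v0, t):
    """Ideal ballistic position: p(t) = p0 + v0*t + g*t^2 / 2 (no drag)."""
    return p0 + v0 * t + 0.5 * G * t**2

def intercept_plane(p0, v0, x_catch):
    """When and where the ball crosses the vertical plane x = x_catch."""
    if v0[0] == 0.0:
        return None                    # never moves toward the plane
    t = (x_catch - p0[0]) / v0[0]      # x(t) is linear without drag
    return (t, ball_position(p0, v0, t)) if t >= 0 else None

# Made-up throw: released 2 m up at the origin, toward a catcher at x = 10 m.
p0 = np.array([0.0, 0.0, 2.0])         # release point (m)
v0 = np.array([8.0, 0.5, 6.0])         # release velocity (m/s)
t, p = intercept_plane(p0, v0, x_catch=10.0)
print(f"intercept at t = {t:.2f} s: y = {p[1]:.2f} m, z = {p[2]:.2f} m")
```

The hard part, as you say, is producing p0 and v0 from camera frames in the first place.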
But this seems unrelated to speech production or recognition, or the other abilities needed to pass a classic Turing test. I think any AI that could pass a pure-language Turing test could have such a narrow AI bolted on.
It seems likely to me (although I am not an expert or even a well-informed layman) that almost any human-built AI design will have many modules dedicated to specific important tasks, including visual recognition and a 3d world model that can predict simple movement. It wouldn’t actually solve such problems using its general intelligence (or its language modules) from first principles. But again, this is just speculation on my part.
I think any AI that could pass a pure-language Turing test could have such a narrow AI bolted on.
That’s precisely why the origin of the AI is so important—it’s only if the general AI developed these skills without bolt-ons, that we can be sure it’s a real general intelligence.
That’s a sufficient condition, but I don’t think it’s a necessary one—it’s not only then that we’ll know it has real GI (general intelligence). For instance it might have had, or adapted, narrow modules for those particular purposes before its GI became powerful enough.
Also, human GI is barely powerful enough to write the algorithms for new modules like that. In some areas we still haven’t succeeded; in others it took us hundreds of person-years of R&D. Humans are an existence proof that, with good enough narrow modules, the GI part doesn’t have to be… well, superhumanly intelligent.
Yes—my test criteria are unfair to the AI (arguably the Turing test is as well). I can’t think of methods that have good specificity as well as sensitivity.
On the other hand, we’re perfectly capable of acquiring skills that we didn’t evolve to possess, e.g., flying planes.
We do have a general intelligence. Without it we’d be just smart chimps.
But in most fields where we have a dedicated module—visual recognition, spatial modeling, controlling our bodies, speech recognition, processing, and production—our GI couldn’t begin to replace it. And we haven’t been able to easily create equivalent algorithms (and the problem isn’t just computing power).