The whole embodied cognition thing is a massive, elementary mistake as bad as all the ones that Eliezer has analysed in the Sequences. It’s an instant fail.
Can you expand on this just a bit? I am leaning, slowly, in the same direction, and I’d like a bit of a sanity check on this claim.
Firstly, I have no problem with the “embodied cognition” idea so far as it relates to human beings (or animals, for that matter). Yes, people also think with their bodies, store memories in the environment, point at things, and so on. This seems to me both true and unremarkable. So unremarkable as to hardly be worth the amount of thought that apparently goes into it. While it may be interesting to trace out all the ways in which it happens, I see no philosophical importance in the details.
Where it goes wrong is in the application to AGI that says that because people do this, it is an essential part of how an intelligence of any sort must operate, and therefore a man-made intelligent machine must be given a body. The argument mistakes a superficial fact about observed intelligences for a fact about the mechanism whereby an intelligence of any sort must operate. There is a large and expanding body of work on making ever more elaborate robot puppets like the Nao, explicitly following a research programme of developing “embodied cognition”.
I cannot see these projects as being of any interest. I would be a lot more interested in seeing someone build a human-sized robot that can run unsupported on two legs (Boston Dynamics’ ATLAS is getting there), especially if it can run faster than a man while carrying a full military pack and isn’t tethered to a power cable (not yet done). However, nothing like that is a prerequisite to AGI. I do hold a personal opinion, which I’m not going to argue for here, that if someone developed a simple method of solving the control problems of an all-terrain running robot, they might get from that some insight into how to get farther, such as an all-terrain running robot that can hunt down humans trying to avoid it. Of course, the Unfriendly directions in which that might lead are obvious, as are the military motivations for building such machines, or for inviting people to come up with designs. Of course, these powers will only be used for Good.
Since the embodied approach has been around in strength since the 1980s, and can be found in Turing in 1950, I think it fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
The deaf communicate without sound, the blind without sight, and the limbless without pointing hands. On the internet people communicate without any of these. It doesn’t seem to hold anyone up, except in the mere matter of speed, as in the case of Stephen Hawking communicating by twitching cheek muscles.
Ah, no, the magic ingredient must be society! Cognition always takes place within society. Feral children are developmentally disabled for want of society. The evidence is clear: we must develop societies of AIs before they can be intelligent.
No, it’s language they must have! An AGI’s cognition must be based on a language. So if we design the perfect language, AGI will be a snap.
No, it’s upbringing they must have! So we’ll design a robot to be initially like a newborn baby and teach it through experience!
No, it’s....
No. The general form of all these arguments is broken.
Since the embodied approach has been around in strength since the 1980s, and can be found in Turing in 1950, I think it fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
This is where you lose me. Isn’t that an equally effective argument against AGI in general?
Isn’t that an equally effective argument against AGI in general?
“AGI in general” is a thing of unlimited broadness, about which lack of success so far implies nothing more than lack of success so far. Cf. flying machines, which weren’t made until they were. Embodied cognition, on the other hand, is a definite thing, a specific approach that is at least 30 years old, and I don’t think it’s even made a contribution to narrow AI yet. It is only mentioned in Russell and Norvig in their concluding section on the philosophy of Strong AI, not in any of the practical chapters.
I took RichardKennaway’s post to mean something like the following:
“Birds fly by flapping their wings, but that’s not the only way to fly; we have built airplanes, dirigibles and rockets that fly differently. Humans acquire intelligence (and language) by interacting with their physical environment using a specific set of sensors and effectors, but that’s not the only way to acquire intelligence. Tomorrow, we may build an AI that does so differently.”
But since that idea has been around in strength since the 1980s, and can be found in Turing in 1950, apparently it’s fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
I think we have seen it by now; we just don’t call it “AI”. Even in Turing’s day, we had radar systems that could automatically lock on to enemy planes and shoot them down. Today, we have search engines that can provide answers (with a significant degree of success) to textual or verbal queries; mapping software that can plot the best path through a network of roadways; chess programs that can consistently defeat humans; cars that drive themselves; planes that fly themselves; plus a host of other things like that. Sure, none of these projects are Strong AI, but neither are they toys.
This depends on the definition of ‘toy projects’ that you use. For the sort of broad definition you are using, where ‘toy projects’ refers literally to toys, Richard Kennaway’s original claim that the embodied approach had only produced toys is factually incorrect. For the definition of ‘toy projects’ that both Richard Kennaway and Document are using, in which ‘toy projects’ is closer to ‘toy models’ (i.e. attempts at a simplified version of Strong AI), this is an argument against AGI in general.
I see what you mean, but I’m having trouble understanding what “a simplified version of Strong AI” would look like.
For example, can we consider a natural language processing system that’s connected to a modern search engine to be “a simplified version of Strong AI”? Such a system is obviously not generally intelligent, but it does perform several important functions, such as natural language processing, that would pretty much be a requirement for any AGI. However, the implementation of such a system is most likely not generalizable to an AGI (if it were, we’d have AGI by now). So, can we consider it to be a “toy project”, or not?
The “magic ingredient” may be a bridging of intuitions: an embodied AI that you can interact with more naturally offers more intuitive metrics for progress, milestones that can be used to attract funding because they make intuitive sense.
Obviously you can build an AGI using only Lego bricks. And you can build an AGI “purely” as software (i.e. with variable hardware substrates). The steelman for pursuing embodied cognition would not be “embodiment is strictly necessary to build AGIs” (boring!), but rather “given humans with a goal of building an AGI, going the embodiment route may be a viable approach”.
I well remember that early morning in the CS lab, the better part of a decade ago, when I stumbled, still half asleep, into a side room to turn on the lights, only to stare into the eye of Eccerobot (in an earlier incarnation), which was visiting our lab. Shudder.
I used to joke that my goal in life would be to build the successor creature, and to be judged by it (humankind and me both). To be judged and to be found unworthy in its (in this case single) eye, and to be smitten. After all, what better emotional proof to have created something of worth is there than your creation judging you to be unworthy? Take my atoms, Adambot!