I took RichardKennaway’s post to mean something like the following:
“Birds fly by flapping their wings, but that’s not the only way to fly; we have built airplanes, dirigibles and rockets that fly differently. Humans acquire intelligence (and language) by interacting with their physical environment using a specific set of sensors and effectors, but that’s not the only way to acquire intelligence. Tomorrow, we may build an AI that does so differently.”
But since that idea has been around in strength since the 1980s, and can be found in Turing as far back as 1950, it seems fair to say that if it worked beyond the toy projects that AGI attempts always produce, we would have seen it by now.
I think we have seen it by now; we just don’t call it “AI”. Even in Turing’s day, we had radar systems that could automatically lock onto enemy planes and shoot them down. Today, we have search engines that can answer textual or verbal queries with a significant degree of success; mapping software that can plot the best path through a network of roadways; chess programs that consistently defeat humans; cars that drive themselves; planes that fly themselves; and a host of other things like that. Sure, none of these projects are Strong AI, but neither are they toys.
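To make the route-planning example concrete: what mapping software does at its core is a shortest-path search over a weighted graph of roads. Here is a minimal sketch using Dijkstra’s algorithm; the road network, node names, and distances are invented purely for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: cheapest route from start to goal.

    graph maps each node to a list of (neighbor, distance) pairs.
    Returns (total_distance, path) or None if goal is unreachable.
    """
    # Priority queue of (cost so far, current node, path taken).
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph[node]:
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return None

# A toy road network (hypothetical distances in kilometers).
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("A", 5), ("D", 1)],
    "C": [("A", 2), ("D", 7)],
    "D": [("B", 1), ("C", 7)],
}

print(shortest_path(roads, "A", "D"))  # (6, ['A', 'B', 'D'])
```

Real mapping systems layer traffic data, heuristics (e.g. A*), and much larger graphs on top of this, but the underlying idea is the same: a well-understood algorithm solving a narrow problem extremely well, without any general intelligence.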
This depends on the definition of ‘toy projects’ that you use. Under the broad definition you are using, where ‘toy projects’ refers literally to toys, Richard Kennaway’s original claim that the embodied approach had only produced toys is factually incorrect. Under the definition that both Richard Kennaway and Document are using, in which ‘toy projects’ is closer to ‘toy models’, i.e. attempts at a simplified version of Strong AI, this is an argument against AGI in general.
I see what you mean, but I’m having trouble understanding what “a simplified version of Strong AI” would look like.
For example, can we consider a natural language processing system that’s connected to a modern search engine to be “a simplified version of Strong AI”? Such a system is obviously not generally intelligent, but it does perform several important functions, such as natural language processing, that would pretty much be a requirement for any AGI. However, the implementation of such a system is most likely not generalizable to an AGI (if it were, we’d have AGI by now). So, can we consider it to be a “toy project”, or not?