The roboticists I know don’t claim to know how to build AGI. Why would they?
Because they read up on artificial intelligence, study philosophy of mind, and build systems that exhibit intelligent behavior. And unlike many people who claim to be AI researchers, they actually build working systems that seem to be engaging in learning, communication, and other intelligent behaviors.
In my experience, artificial intelligence is something of a God of the Gaps for computer science: techniques that work get appropriated by other subfields, relabelled, and put to work. Someone who claims to be an AI researcher is essentially saying "I am studying things that don't actually work yet."
This is probably related to the long “AI winter” caused by the collapse of hype.
It should be noted that the “AI winter” is somewhat apocryphal, and a lot of the much-maligned techniques of GOFAI (or things similar to them) are being used to great effect in small chunks that work together.
Yes, but how often do you hear those GOFAI techniques described as AI except in AI textbooks?
Speaking of which, I have a copy of Russell and Norvig’s AIMA on my desk right now, and in fact I should probably be spending more time doing exercises from it and less time posting on LW...
Not so; there are lots of problems in CS that you can't naturally label as AI problems. If you go in the opposite direction and say that AI by definition solves all problems, then you can say that whatever unsolved problem you are working on is actually a special case of AI. But that's a pretty empty claim.