Amongst people who actually build robots, it’s generally understood that you don’t get general-purpose AI by creating a ‘general intelligence’ and letting it run; it seems much more likely that we’ll need a lot of small, task-specific systems that can work together.
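The "lots of small, task-specific systems working together" idea echoes things like Brooks-style subsumption architectures from robotics. A toy sketch of the flavor, assuming a priority-ordered stack of simple behaviors (all names and the state format are hypothetical, purely for illustration):

```python
# Toy sketch: small task-specific behaviors composed by priority,
# loosely in the spirit of a subsumption architecture (illustrative only).

def avoid_obstacle(state):
    # Highest-priority behavior: react if something is too close.
    if state["distance_to_obstacle"] < 0.5:
        return "turn_away"
    return None  # defer to lower-priority behaviors

def seek_goal(state):
    # Mid-priority behavior: head for the goal when it is visible.
    if state["goal_visible"]:
        return "move_toward_goal"
    return None

def wander(state):
    # Lowest-priority fallback: always produces an action.
    return "wander"

BEHAVIORS = [avoid_obstacle, seek_goal, wander]  # priority order

def control_step(state):
    """Return the action of the highest-priority behavior that fires."""
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(control_step({"distance_to_obstacle": 0.2, "goal_visible": True}))
# prints turn_away: the obstacle behavior subsumes goal-seeking
```

No component here is a "general intelligence"; any apparent purposefulness comes from how the narrow pieces are layered.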
The roboticists I know don’t claim to know how to build AGI. Why would they?
Because they read up on artificial intelligence, study philosophy of mind, and build systems that exhibit intelligent behavior. And unlike many people who claim to be AI researchers, they actually build working systems that seem to be engaging in learning, communication, and other intelligent behaviors.
In my experience, artificial intelligence is something of a God of the Gaps for computer science: techniques that work get appropriated by other subfields, relabelled, and put to work. Someone who claims to be an AI researcher is essentially saying "I am studying things that don't actually work yet".
This is probably related to the long “AI winter” caused by the collapse of hype.
It should be noted that the “AI winter” is somewhat apocryphal, and a lot of the much-maligned techniques of GOFAI (or things similar to them) are being used to great effect in small chunks that work together.
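A concrete case of a GOFAI technique quietly at work: A* search, straight out of the textbooks, now ships in game pathfinding and route planners without anyone calling it AI. A minimal sketch (the grid encoding and function name are mine, not from any particular library):

```python
import heapq

def astar(grid, start, goal):
    """Classic A* shortest-path length on a 4-connected grid (0 = free, 1 = wall)."""
    rows, cols = len(grid), len(grid[0])

    def h(p):
        # Manhattan distance: an admissible heuristic on a 4-connected grid.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best_g = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the wall forces a detour around the right
```

This is exactly the kind of "small chunk" the comment above describes: a self-contained search component that does one job well inside a larger system.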
Yes, but how often do you hear those GOFAI techniques described as AI except in AI textbooks?
Speaking of which, I have a copy of Russell and Norvig’s AIMA on my desk right now, and in fact I should probably be spending more time doing exercises from it and less time posting on LW...
Not so: there are lots of problems in CS that you can't naturally label as AI problems. And if you go in the opposite direction, defining AI as whatever solves all problems, then you can say that whatever unsolved problem you are working on is really a special case of AI. But that's a pretty vacuous claim.
You are channeling too much certainty through that appeal to authority. We are too far from seeing the solution to describe its form in detail, much less to defer to popular perception about it.
Rather in line with my point. Claiming that this is not really related to general-purpose AI did not seem warranted when the people who build the closest things we have to thinking machines would disagree with that sentiment. I was showing that the statement lacked merit because informed folks think otherwise.
Err, my point is obviously that AI researchers are too far from seeing the solution for their opinion to count as anything approaching certainty. That was to point out the false connotation of your original comment. Not to mention that there is actually no consensus among the experts, which makes your statement factually wrong as well.
Amongst people who actually install air conditioners, it’s generally understood that you get general-purpose AI by adding freon.