You seem to frame this as a dichotomy: either there are advanced secret techniques, or it's just a matter of common sense and wisdom and therefore as good as useless. Maybe there's some initial value, though, simply in naming things more precisely and painting a target on them: "we don't understand this region, which now has a name, nearly as well as we'd like." Chapman is a former AI programmer from the 1980s, and my reading of him is that he's essentially been trying to map the poorly understood half of human rationality whose difficulty blindsided the 20th-century AI programmers.
And very smart and educated people were blindsided when they got around to trying to build the first AIs. This wasn't a matter of charlatans or people lacking common sense. Nobody had really broken rationality apart into its rule-following ("solve this quadratic equation") and pattern-recognition ("is that a dog?") parts, because up until the 1940s every rule-based organization was run entirely by humans, who cheat, constantly applying their pattern-recognition powers to nudge just about everything going on.
So are there better people than Chapman talking about this stuff, or is there an argument for why this is an uninteresting question for human organizations, despite it being recognized as a central problem in AI research via things like Moravec's paradox?