Whether anthropomorphism in general, or the Detached Lever fallacy in particular, reduced progress in AI by so much as a whole factor of two, is an interesting question; progress is driven by the fastest people. Removing anthropomorphism might not have sped things up much—AI is hard.
However, I would certainly bet that the size of the most exaggerated claims was driven primarily by anthropomorphism; if the culprit researchers involved had never seen a human, it would not have occurred to them to make claims within two orders of magnitude of what they claimed. Note that the size of the most exaggerated claims is driven by those most overconfident and most subject to anthropomorphism.
As you know, I feel that if one ignores all exaggerated claims and looks only at what actually did get accomplished, then AI has not progressed any more slowly than would be expected for a scientific field tackling a hard problem. I don’t think AI is moving any more slowly on intelligence than biologists did on biology, back when élan vital was still a going hypothesis. There are specific AI researchers whom I revere, like Judea Pearl and Edwin Jaynes, and others whom I respect for their wisdom even when I disagree with them, like Douglas Hofstadter.
But on the whole, AGI is not now and never has been a healthy field. It seems to me—bearing in mind that we disagree about modesty in theory, though not, I’ve always argued, in practice—it seems to me that the amount of respect you want me to give the field as a whole, would not be wise even if this were a healthy field, given that this is my chosen area of specialization and I am trying to go beyond the past. For an unhealthy field, it should be entirely plausible even for an outsider to say, “They’re Doing It Wrong”. It is akin to the principle of looking to Einstein and Buffett to find out what intelligence is, rather than Jeff Skilling. A paradigm has to earn its respect, and there’s no credit for trying. The harder and more diligently you try, and yet fail, the more probable it is that the methodology involved is flawed.