The mixture of skills and abilities that a person has is not the same as the set of skills which could result in the dangers Bostrom will discuss later, or other dangers and benefits which he does not discuss… Systems which are quite deficient in some ways, relative to people, may still be extremely dangerous… “Human-level intelligence” is only a first-order approximation to the set of skills and abilities which should concern us.
I agree, and believe that the emphasis on “superintelligence”, depending on how that term is interpreted, might be an impediment to clear thinking in this area. Following David Chalmers, I think it’s best to formulate the problem more abstractly, by using the concept of a self-amplifying cognitive capacity. When the possession of that cognitive capacity is correlated with changes in some morally relevant capacity (such as the capacity to cause the extinction of humanity), the question then becomes one about the dangers posed by systems which surpass humans in that self-amplifying capacity, regardless of how much they resemble typical human beings or how they perform on standard measures of intelligence.