This multidimensionality is exactly why I think the term “human-level intelligence” should not be used. My impression is that it suggests a one-dimensional kind of ability, with a threshold at which the quality changes drastically; and the term even seems to suggest that this threshold lies at a level that is in fact not decisive.
Yes, that’s fair enough. It’s not as though we have any examples of systems with broadly human-level intelligence for the term to apply to anyway.
I do still think it’s a useful term for hypothetical discussions, referring to systems that are neither obviously subhuman nor obviously superhuman in broad capabilities. It is possible that such systems will never exist. If we develop superintelligence, it may be via systems that remain obviously subhuman in some respects while superhuman in others, or via a discontinuity in capability, or through even stranger possibilities.