Taboo “human-level intelligence”
Maybe calls for tabooing terms can become a bit inflationary. Still, talking about whether and when AI will reach or surpass “human-level intelligence” can misleadingly frame the discussion, so I recommend avoiding that term.
What is “human-level intelligence” and when will AI surpass it?
My understanding is that there is a wide spectrum of human intelligence, and when people are discussing AI capabilities and threats, it is important to know where in that spectrum the AI in question sits.
Moreover, intelligence is not a one-dimensional concept. Sure, there is the g factor and the like in human brains. But even an ordinary calculator can, well, calculate much faster than humans can. To be dangerous, an AI does not need to be better than all people in every discipline. Does it have to be a great composer, or an art historian? No.
Yet the vagueness of the concept of “human-level intelligence” invites further misleading ways of assessing AI capabilities. First, dismissing domain-specific abilities: “Sure, this AI is great at explaining jokes, but it cannot invent new salsa recipes, whereas humans can.” True, but the human brain is itself quite modular, with subsystems doing different things, so why should an AI not likewise draw on different AIs? Second, comparing against the best human experts in some domain: “Look, this AI may win a game of Diplomacy, but not against the best players.” Fine, but if human-level intelligence means being the best at everything, then no human has human-level intelligence. That comparison would only be relevant if humanity coordinated itself perfectly, pitting its best specialist in every domain against the AI.
Additionally, to be dangerous, an AI does not have to be conscious. But this is something else the term “human-level intelligence” may suggest. Why, when the term does not contain the concept of consciousness? Because the lack of concreteness about what human intelligence is invites people to substitute something they believe exists in everyone’s brain regardless of their intelligence: consciousness. Saying that AI surpassing human-level intelligence is a relevant threshold may then sound as if the real danger is AIs developing consciousness.
So avoid the term “human-level intelligence” if you can.
Thanks to Justis Mills for feedback on a draft of this post.