Seems to me that part of our concept of “intelligence” is the ability to optimize in many different domains, and another part is anthropomorphism, because so far humans are the only known example of something that can optimize in many different domains. Now how do we separate these two parts? Which parts of “what humans do” can be removed while preserving the ability to optimize?
If a machine is able to optimize in many different domains, and if this includes human language and psychology, then the machine should be able to understand what humans ask, and then give them the answers that increase their utility (even correcting for possible human misunderstandings and biases). Seems to me that most people, after talking with such a machine, would agree that it is intelligent; by definition it should give them satisfying (not necessarily correct) answers, including answers to questions like “Why?”.
So I think if something is a good cross-domain optimizer, and is able to communicate with humans, humans will consider it intelligent. The opposite direction is the problem: people may assume that something is a necessary part of intelligence (and must be solved when building an AI) even though it is not. In other words, in black-box testing people will report the AI as intelligent, but some of their ideas about intelligence may be superfluous when constructing such an AI.
EDIT: Less seriously, your analogy with “magic” works here too: if people get very satisfying answers to problems they were not able to solve, many of them will consider the machine magical as well, and yet they will have bad ideas about its construction.