Ah, I see where the catch is here. You presuppose that ‘intelligent’ already contains ‘human’ as a subdomain, so that anything intelligent can, by definition, understand the subtext of any human interaction. I think that part of the purpose of LW and of the Sequences is to show that intelligence in this domain should be deconstructed as “optimization power”, which carries a more neutral connotation. The point of contention, as I see it and as the whole FAI problem presupposes, is that it’s far easier to create an agent with high optimization power and low ‘intelligence’ (as you understand the term) than one with high OP and high intelligence.
Eliezer’s response to my argument would be that “the genie knows, but does not care.” So he would disagree with you: the genie understands the subtext quite well. The problem with his answer, of course, is that it implies the AI knows that happiness does not mean pasting smiley faces, yet wants to paste smiley faces anyway. This will not happen, because values are learned progressively; they are not fixed at one arbitrary stage.
In a sufficiently broad sense of “in principle”, you can separate optimization from intelligence. A giant lookup table (GLUT), for example, can optimize without being intelligent, and AIXI can likewise optimize while probably not being intelligent. But note that neither a GLUT nor AIXI is physically possible in the real world.
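To make the GLUT point concrete, here is a minimal sketch of my own (the gridworld, goal, and scoring are illustrative assumptions, not anything from the discussion above): an agent whose entire “policy” is a precomputed table. It reliably steers toward its goal, so it has optimization power in the relevant sense, yet at runtime it consults no model and no concepts; it just looks things up.

```python
# Toy "giant lookup table" agent: all optimization happens offline,
# by exhaustive enumeration; the runtime agent understands nothing.
from itertools import product

STATES = list(product(range(4), repeat=2))   # a small 4x4 gridworld (illustrative)
ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
GOAL = (3, 3)

def step(state, action):
    dx, dy = ACTIONS[action]
    x, y = state
    return (min(max(x + dx, 0), 3), min(max(y + dy, 0), 3))

def distance_to_goal(state):
    return abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])

# Build the lookup table in advance: for every state, store the action
# that most reduces distance to the goal.
GLUT = {
    s: min(ACTIONS, key=lambda a: distance_to_goal(step(s, a)))
    for s in STATES
}

def glut_agent(state):
    # At runtime the agent has no model and no concept of a "goal";
    # it simply retrieves a precomputed answer.
    return GLUT[state]

if __name__ == "__main__":
    state = (0, 0)
    while state != GOAL:
        action = glut_agent(state)
        state = step(state, action)
        print(action, "->", state)
```

The reason this trick does not scale is also the reason a literal GLUT is impossible in practice: the table has to enumerate every state in advance, which is exactly what the real world does not permit.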
In the real world, optimization power cannot be separated from intelligence. The reason is that nothing will be able to optimize without having general concepts with which to understand the world. These general concepts will necessarily be learned in a human context, given that we are talking about an AI programmed by humans. So its conceptual scheme, and consequently its values, will roughly match ours.