Eliezer’s response to my argument would be that “the genie knows, but does not care.” So he would disagree with you: on his view, the AI understands the subtext quite well. The problem with his answer, of course, is that it implies an AI that knows happiness does not mean pasting smiley faces onto things, yet wants to paste smiley faces anyway. This will not happen, because an AI’s values are learned progressively, alongside its concepts; they are not frozen at one arbitrary stage.
In a sufficiently broad sense of “in principle,” you can separate optimization from intelligence. For example, a giant lookup table (GLUT) can optimize, but it is not intelligent. Similarly, AIXI can optimize, but it is probably not intelligent. Note, however, that neither a GLUT nor an AIXI is possible in the real world: a GLUT for any realistic environment would be astronomically large, and AIXI is uncomputable.
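To make the GLUT point concrete, here is a minimal sketch of a lookup-table “agent” in a toy environment (the environment and all names here are my own illustration, not anything from the original argument). It reaches its goal optimally by pure table lookup, with no model, concepts, or reasoning of any kind; what makes this impossible at real-world scale is that the table would have to enumerate every possible situation in advance.

```python
# Illustrative sketch only: a "GLUT agent" for a tiny deterministic
# environment. It acts optimally by pure table lookup.

# Environment: states 0..3 on a line; the goal is to reach state 3.
def step(state: int, action: str) -> int:
    """Move left or right along the line [0, 3]."""
    return min(3, state + 1) if action == "right" else max(0, state - 1)

# The entire "agent" is a precomputed table from state to action.
# In this toy world, the optimal move from every state is "right".
GLUT = {0: "right", 1: "right", 2: "right", 3: "right"}

state = 0
while state != 3:
    state = step(state, GLUT[state])  # optimization without intelligence
print("goal reached:", state)
```

The table optimizes in the behavioral sense, yet there is nothing inside it that understands the world; that is exactly the sense in which optimization and intelligence come apart only “in principle.”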
In the real world, optimization power cannot be separated from intelligence, because nothing can optimize without general concepts with which to understand the world. These general concepts will necessarily be learned in a human context, given that we are talking about an AI programmed by humans. So its conceptual schema, and consequently its values, will roughly match ours.