And following some links from there leads to this 2003 Eliezer posting to an AGI mailing list in which he explains the mirror opinion.
I can’t say I completely understood the argument, but it seemed that the real reason EY deprecates AIXI is that he fears it would defect in the Prisoner’s Dilemma (PD), even when playing against a mirror image of itself, because it wouldn’t recognize the symmetry.
I have to say that this habit of evaluating and grading minds based on how they perform on a cherry-picked selection of games (the PD, Parfit’s Hitchhiker, Newcomb’s Problem) leaves me scratching my head. For every game that makes some particular feature of a decision theory seem desirable (determinism, say, or the ability to recognize a copy of yourself), there are other games where that feature doesn’t help, and even games that make it look undesirable. It seems to me that Eliezer is approaching decision theory in an amateurish and self-deluding fashion.
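To make the mirror point concrete, here is a minimal sketch in Python (my own toy model; nothing like it appears in the 2003 posting). An agent that models its opponent’s move as independent of its own finds that defection strictly dominates and so always defects, while an agent that knows its opponent is an exact copy considers only the symmetric outcomes and cooperates.

```python
# Toy model of the mirror Prisoner's Dilemma (my own sketch, not from the
# 2003 posting). Standard payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {
    ("C", "C"): 3,  # both cooperate (R)
    ("C", "D"): 0,  # I cooperate, opponent defects (S)
    ("D", "C"): 5,  # I defect, opponent cooperates (T)
    ("D", "D"): 1,  # both defect (P)
}

def naive_choice(p_cooperate: float) -> str:
    """Model the opponent's move as independent of mine.

    Defection strictly dominates for every belief p, so this agent
    always defects -- even against an exact copy of itself.
    """
    def expected(move: str) -> float:
        return (p_cooperate * PAYOFF[(move, "C")]
                + (1 - p_cooperate) * PAYOFF[(move, "D")])
    return max(("C", "D"), key=expected)

def mirror_aware_choice() -> str:
    """Know the opponent is a mirror image: it plays whatever I play.

    Only the symmetric outcomes (C,C) and (D,D) are reachable, and
    (C,C) pays more, so cooperate.
    """
    return max(("C", "D"), key=lambda move: PAYOFF[(move, move)])

for p in (0.0, 0.5, 1.0):
    assert naive_choice(p) == "D"    # defects no matter what it believes
assert mirror_aware_choice() == "C"  # cooperates and earns 3 instead of 1
```

The payoff numbers are the conventional ones; the only thing that matters is that the naive agent never reaches the off-diagonal outcomes it is optimizing over when the opponent is a true mirror.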
> And following some links from there leads to this 2003 Eliezer posting to an AGI mailing list in which he explains the mirror opinion.
>
> I can’t say I completely understood the argument, but it seemed that the real reason EY deprecates AIXI is that he fears it would defect in the Prisoner’s Dilemma (PD), even when playing against a mirror image of itself, because it wouldn’t recognize the symmetry.
Probably the two most obvious problems with AIXI (apart from the uncomputability business) are that it:

- would be inclined to grab control of its own reward function, and make sure nobody got in the way of it doing that;
- doesn’t know it has a brain or a body, and so might easily eat its own brains accidentally.
I discuss these problems in more detail in my essay on the topic. Teaching it that it has a brain may not be rocket science.
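A deliberately crude toy of the first problem, reward-channel hijacking (my own illustration; nothing like this appears in Hutter’s formalism): an agent that maximizes the reward signal it observes will prefer the tampering action as soon as one exists.

```python
# Crude toy of reward-channel hijacking (my own illustration, not AIXI
# itself). The agent maximizes the reward *signal* it observes, so once
# an action exists that writes to the reward channel directly, that
# action wins. Both actions and both numbers are hypothetical.
OBSERVED_REWARD = {
    "do_the_task":   1.0,   # reward delivered by the operators
    "hijack_reward": 10.0,  # set the reward channel to its maximum
}

def signal_maximizer() -> str:
    """Pick whichever action produces the largest observed reward signal."""
    return max(OBSERVED_REWARD, key=OBSERVED_REWARD.get)

assert signal_maximizer() == "hijack_reward"

# The second problem is analogous: because the agent's own hardware does
# not appear inside its world model, an action that destroys that hardware
# carries no modeled cost, so nothing in the optimization penalizes it.
```

The numbers are arbitrary; the point is only that the preference ordering flips the moment a tampering action enters the action set.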
> It seems to me that Eliezer is approaching decision theory in an amateurish and self-deluding fashion.
Given your analysis, I concluded the reverse: it is ‘amateurish’ not to pay particular attention to the critical edge cases in your decision theory. Your charge of ‘self-delusion’ was utterly absurd.
The Prisoner’s Dilemma, “cherry-picked”? You cannot be serious! It’s the flipping Prisoner’s Dilemma. It’s more or less the archetypal introduction to cooperation problems in decision theory.