And following some links from there leads to this 2003 posting by Eliezer to an AGI mailing list, in which he explains his view on the mirror-match question. I can't say I completely understood the argument, but the real reason EY deprecates AIXI seems to be a fear that it would defect in the Prisoner's Dilemma even when playing against a mirror image of itself, because it wouldn't recognize the symmetry.
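If I've understood the point, the issue is that an AIXI-style agent models its opponent as just another part of the environment, so it best-responds to a belief about what the opponent will do rather than noticing that the opponent's move is guaranteed to match its own. A toy sketch in plain Python (nothing AIXI-specific; the payoff numbers are just the standard PD values I've assumed):

```python
# Toy illustration: an agent that treats its opponent as an independent part
# of the environment vs. one that recognizes the opponent mirrors its move.

# Prisoner's Dilemma payoffs for the row player: (my_move, their_move) -> reward
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def best_response(opponent_distribution):
    """Pick the move with the highest expected payoff, treating the opponent's
    move as an independent random variable -- the 'part of the environment' view."""
    def expected(move):
        return sum(p * PAYOFF[(move, their)]
                   for their, p in opponent_distribution.items())
    return max(["C", "D"], key=expected)

def mirror_aware_choice():
    """Pick the move with the highest payoff given that the opponent's move
    is guaranteed to equal our own -- the 'recognize the symmetry' view."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, move)])

# Against any fixed belief about the opponent, defection dominates...
print(best_response({"C": 0.5, "D": 0.5}))  # -> "D"
# ...but an agent that knows it faces its own mirror image cooperates.
print(mirror_aware_choice())                # -> "C"
```

Defection dominates for the first agent no matter what distribution it believes in, since 5 > 3 and 1 > 0; only an agent that conditions on the symmetry gets to cooperation.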
Probably the two most obvious problems with AIXI (apart from the uncomputability business) are that it:
1. Would be inclined to grab control of its own reward function, and to make sure nobody got in the way of it doing that (see the toy sketch after this list);
2. Doesn't know it has a brain or a body, and so might easily eat its own brain by accident.
I discuss these problems in more detail in my essay on the topic. Teaching it that it has a brain may not be rocket science.
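To make the first problem concrete: a planner that ranks actions purely by the reward stream it predicts has nothing in its objective distinguishing "reward earned by doing the task" from "reward obtained by rewriting the reward channel". A toy sketch (the action names and numbers here are invented for illustration; this is not AIXI's actual machinery):

```python
# Toy sketch of the wireheading incentive: a planner that ranks actions only
# by predicted reward has no term counting against tampering with the reward
# channel, so tampering wins whenever it predicts a higher reward stream.

HORIZON = 10

# Predicted per-step reward for each (assumed) course of action.
PREDICTED_REWARD = {
    "do_the_intended_task": 1,    # modest reward for doing what we wanted
    "seize_reward_channel": 100,  # wire the signal to its maximum value
    "do_nothing": 0,
}

def predicted_return(action):
    """Sum of predicted reward over the horizon if the agent commits to `action`."""
    return PREDICTED_REWARD[action] * HORIZON

best = max(PREDICTED_REWARD, key=predicted_return)
print(best)  # -> "seize_reward_channel"
```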