I have discussed this problem with Professor Hutter, and though I wouldn’t claim to be able to predict how he would respond to this dialogue, I think his viewpoint is that the anvil problem will not matter in practice. In rough summary of his response: an agent will form a self-model by observing itself taking actions through its own camera. When you write something on a piece of paper, you can read what you are writing, and see your own hand holding the pen. Though AIXI may not compress its own action bits, it will compress the observed results of its actions, and will form a model of its hardware (except perhaps the part that produces and stores those action bits).