I’m not trying to claim that AIXI is a good model in which to explore self-modification. My issue isn’t on the agent-y side at all—it’s on the learning side. It has been put forward that there are facts about the world that AIXI is incapable of learning, even though humans are quite capable of learning them. (I’m assuming here that the environment is sufficiently information-rich that these facts are within reach.) To be more specific, the claim is that humans can learn facts about the observable universe that Solomonoff induction can’t. To me, this claim seems to imply that human learning is not computable, and this implication makes my brain emit, “Error! Error! Does not compute!”
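To spell out the step I'm relying on (a standard dominance result for the universal prior, stated roughly since the exact constants don't matter for the argument): the universal semimeasure $M$ used in Solomonoff induction satisfies

$$
M(x_{1:n}) \;\ge\; 2^{-K(\mu)}\,\mu(x_{1:n}) \qquad \text{for every computable environment } \mu,
$$

where $K(\mu)$ is the length of the shortest program computing $\mu$. So $M$'s cumulative prediction error on data generated by any computable process is bounded by a constant on the order of $K(\mu)$. If human learning is itself a computable process, then whatever a human can eventually come to predict, Solomonoff induction can predict too, paying at most that fixed constant up front. That's why "humans can learn it but Solomonoff induction can't" seems to force "human learning is not computable."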