You seem to be missing the point. AIXI should be able to reason effectively if it incorporates a solution to the problem of naturalistic induction that this whole sequence is trying to get at. But the OP argues that even an implausibly good approximation of AIXI won't solve that problem on its own, so we can't fob the work off onto an AI built on this model. (The OP makes this argument first in "AIXI goes to school," and then more technically in "Death to AIXI.")
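(For readers without the definition at hand, Hutter's standard AIXI equation picks actions by

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_k + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)},$$

where each hypothesis $q$ is a program of length $\ell(q)$ run on a universal monotone Turing machine $U$, which takes the agent's past actions as input and outputs percept-reward sequences. The gloss that follows is mine, not the OP's, but it is the structural point at issue: the agent's own computation never appears inside any $q$. Every hypothesis treats AIXI as standing outside the environment it models, which is the dualism the "Death to AIXI" section is attacking.)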
Tell me if this seems like a strawman of your comment, but you seem to be saying we just need to make AIXI easier to program. That won't help if we don't know how to solve the problem in the first place. As you point out in another comment, part of our own understanding is not directly accessible to introspection, so we don't know how our own brain design solves this (to the extent that it does).