Marcus Hutter’s AIXI is the perfect rolling sphere of advanced agent theory—it’s not realistic, but you can’t understand more complicated scenarios if you can’t envision the rolling sphere. At the core of AIXI is Solomonoff induction, a way of using infinite computing power to probabilistically predict binary sequences with (vastly) superintelligent acuity. Solomonoff induction proceeds roughly by considering all possible computable explanations, with prior probabilities weighted by their algorithmic simplicity, and updating their probabilities based on how well they match observation. We then translate the agent problem into a sequence of percepts, actions, and rewards, so we can use sequence prediction. AIXI is roughly the agent that considers all computable hypotheses to explain the so-far-observed relation of sensory data and actions to rewards, and then searches for the best strategy to maximize future rewards. To a first approximation, AIXI could figure out every ordinary problem that any human being or intergalactic civilization could solve. If AIXI actually existed, it wouldn’t be a god; it’d be something that could tear apart a god like tinfoil.
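For concreteness, both pieces can be written out in Hutter's standard notation (this is the textbook formulation, not anything new: $U$ is a universal monotone Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the horizon). The Solomonoff prior assigns a sequence $x$ the probability

$$M(x) := \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},$$

summing over all programs whose output begins with $x$, and AIXI's action in cycle $k$ is the expectimax expression

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \,(r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.$$

The innermost sum is the simplicity-weighted probability of the percepts given the actions; the alternating maxima and sums pick out the reward-maximizing strategy against that mixture.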
Suggest removing the claim that AIXI is not feasible in practice because it only works with a finite time horizon. This is false: AIXI can be well-defined over an infinite horizon for a wide variety of discount functions (https://www.sciencedirect.com/science/article/pii/S0304397513007135), and Jan Leike's thesis treats its computability level in this setting. I believe that level is not harmed relative to the finite-horizon case, since the damage is already done by the difficulty of computing the interactive version of Solomonoff induction, and the extra limit causes no further degradation. The only important difference is that the expectimax expression for AIXI no longer makes sense; see the sketch below.
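For reference, here is a sketch of the infinite-horizon formulation in the style of the linked paper and Leike's thesis (the notation $\gamma_t$ for the discount sequence is mine). Fix discounts $\gamma_t \ge 0$ with $\sum_{t=1}^{\infty} \gamma_t < \infty$, let $M$ be the universal mixture, and define the value of a policy $\pi$ as

$$V^\pi_\gamma := \mathbb{E}^\pi_M \Biggl[ \sum_{t=1}^{\infty} \gamma_t r_t \Biggr],$$

with AIXI taken to be $\pi^* := \arg\max_\pi V^\pi_\gamma$. The summability condition keeps the value finite, and the policy-level argmax is what replaces the finite-depth expectimax.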