Our main result implies that if you have an unbounded, perception-determined, computable utility function, and you use a Solomonoff-like prior (Solomonoff, 1964), then you have no way to choose between policies using expected utility.
So, it’s within the AIXI context and you feed your utility function infinite (!) sequences of “perceptions”.
I am not sure I understand. Link?
http://arxiv.org/pdf/0907.5598.pdf
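Roughly, and this is my own sketch of the flavor of the argument rather than the paper's actual proof (I'm assuming the Solomonoff-like prior $M$ multiplicatively dominates $2^{-K}$, i.e. $M(x) \ge c \cdot 2^{-K(x)}$, and glossing over how utilities of infinite perception streams are handled): since $U$ is computable and unbounded, for each $n$ you can computably search for a perception history $x_n$ with $U(x_n) \ge 2^n$. The map $n \mapsto x_n$ is then computable, so $K(x_n) \le \log_2 n + O(\log \log n)$, and already the single term the prior puts on $x_n$ blows up:

$$ M(x_n)\,U(x_n) \;\ge\; c\,2^{-K(x_n)}\,2^{n} \;\ge\; c\,2^{\,n - \log_2 n - O(\log\log n)} \;\longrightarrow\; \infty. $$

I think the same trick works whatever policy you follow, since the environment that ignores your actions and just emits $x_n$ has complexity about $K(x_n)$. So $\sum_x M(x)\,U(x)$ fails to converge for every policy (it's $+\infty$ if $U$ is bounded below, and of the form $\infty - \infty$ otherwise), which is why expected utility can't rank policies in this setting.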
We’re not in VNM land any more.