From the outside, counterfactual consequences don’t appear consistent. If the agent actually chooses action A, the idealized UDT-AIXI thingy will see that choosing action B would have given the agent a billion dollars, and choosing C would have given a trillion. Do you see a way around that?
UDT-AIXI could ask which moral arguments the agent would discover if it had more time to think. It won't, of course, examine the counterfactuals of a fact that is already known to the context in which the resulting mathematical structure is to be interpreted. You can only use a normative consideration from the inside, so whenever you step outside, you must also shift the decision problem so that it still allows thinking about the moral considerations.
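To make the original worry concrete, here is a minimal Python sketch of why nothing observed from the outside pins down what the non-chosen actions "would have" yielded. The toy world, the fixed reward of 100, and the `refuted_by_observation` helper are purely illustrative assumptions, not anything UDT-AIXI actually computes; the point is only that counterfactual claims about actions the agent never takes are vacuously consistent with the single actual run.

```python
from itertools import product

# Toy world: the agent in fact chooses A and receives a reward of 100.
# (Illustrative assumption, not from the original exchange.)
ACTUAL_ACTION, ACTUAL_REWARD = "A", 100

def refuted_by_observation(action, claimed_reward):
    """A claim 'choosing `action` yields `claimed_reward`' is refuted only if
    that action was actually taken and produced a different reward."""
    return action == ACTUAL_ACTION and claimed_reward != ACTUAL_REWARD

# Any payoff attributed to the non-chosen actions B and C is consistent with
# the single observed run, because the antecedent "the agent chooses B (or C)"
# is false in it. "B would have given a billion" and "C would have given a
# trillion" go through as easily as any other numbers: the counterfactual
# consequences are not pinned down from the outside.
for payoff_b, payoff_c in product([0, 10**9], [1, 10**12]):
    consistent = not (refuted_by_observation("B", payoff_b) or
                      refuted_by_observation("C", payoff_c))
    print(f"B -> {payoff_b:>13}, C -> {payoff_c:>13}: consistent = {consistent}")
```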