What do you mean? Whatever happens, happens if you are not deciding. A normative idea of a correct decision can be thought about from the inside, even though it's generally uncomputable, so only glimpses of the answer can be extracted from it.
From the outside, counterfactual consequences don’t appear consistent. If the agent actually chooses action A, the idealized UDT-AIXI thingy will see that choosing action B would have given the agent a billion dollars, and choosing C would have given a trillion. Do you see a way around that?
UDT-AIXI could ask which moral arguments the agent would discover if it had more time to think. Of course, it won't examine counterfactuals of a fact already known to the context in which the resulting mathematical structure is to be interpreted. A normative consideration can only be used from the inside, so whenever you step outside, you must also shift the decision problem so that moral considerations can still be thought about.