The issue in the OP is that the possibility of other situations influences the agent's decision. The standard way of handling this is to agree to disregard other situations, including by appealing to Omega's stipulated ability to inspire belief (that is the whole reason for introducing the trustworthiness clause). This belief, if the reality of situations is treated as equivalent to their probability in the agent's eyes, expels the other situations from consideration.
The idea Paul mentioned is just another way of making sure that the other situations don't intrude on the thought experiment. Since the main principle is to get this done somehow, it doesn't really matter whether a universal prior likes anti-muggers more than muggers; if it did, we would just need to change the thought experiment.
Thought experiments are not natural questions that rate the usefulness of decision theories; they are tests that examine particular features of decision theories. So if the investigation strays too far afield (as in looking into the a priori weights of anti-muggers), that calls for a change in the thought experiment.
I reason as follows:
Omega inspires belief only after the agent encounters Omega.
According to UDT, the agent should not update its policy based on this encounter; it should simply follow it.
Thus the agent should act according to whatever policy is best under its original (e.g. universal) prior, from before it encountered Omega (or indeed learned anything about the world).
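To make that last step concrete, here is a minimal sketch of what "best policy under the original prior" could cash out to in this setting, and why the prior's relative weight on muggers versus anti-muggers is what the choice ends up turning on. The scenario weights and the $100/$10,000 payoffs are illustrative assumptions, not anything specified in the thread.

```python
# Illustrative sketch of UDT-style policy choice: score each whole policy
# against a fixed prior over worlds, with no updating on observations.
# All numbers below are made-up assumptions for illustration.

PRIOR = {
    # ("world kind", "coin result"): prior weight
    ("mugger", "heads"): 0.45,       # Omega rewards a policy that pays on tails
    ("mugger", "tails"): 0.45,       # Omega asks the agent for $100
    ("anti_mugger", "heads"): 0.05,  # hypothetical Omega that rewards refusers
    ("anti_mugger", "tails"): 0.05,
}

def payoff(world, policy_pays):
    """Utility received in `world` by an agent whose policy is to pay on tails."""
    kind, coin = world
    if kind == "mugger":
        if coin == "heads":
            return 10_000 if policy_pays else 0  # reward for being a payer
        return -100 if policy_pays else 0        # cost of actually paying
    # anti-mugger: rewards the refusing policy on heads instead
    if coin == "heads":
        return 0 if policy_pays else 10_000
    return -100 if policy_pays else 0

def policy_value(policy_pays):
    """Expected utility of the policy under the original prior (no updating)."""
    return sum(w * payoff(world, policy_pays) for world, w in PRIOR.items())

print("pay on tails:   ", policy_value(True))   # 4450.0 with these weights
print("refuse on tails:", policy_value(False))  # 500.0 with these weights
```

With most of the weight on ordinary muggers, the paying policy wins; tilt the weights far enough toward anti-muggers and refusing wins instead, which is the sense in which one would then "just need to change the thought experiment."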
I think either:
the agent does update, in which case, why not update on the result of the coin flip?
or
the agent doesn't update, in which case, what matters is simply the optimal policy given the original prior.