Wha? The probability distribution given by math intuition isn’t part of the problem statement, it’s part of the solution. We already know how to infer it from the utility function in simple cases, and the idea is that it should be inferable in principle.
When I read your comments, I often don’t understand what you understand and what you don’t. For the benefit of onlookers I’ll try to explain the idea again anyway.
A utility function defined on vectors of execution histories may be a weighted sum of utility functions on individual execution histories, or it may be something more complex. For example, you may care about the total amount of chocolate you get in world-programs P1 and P2 combined. This corresponds to a “prior probability distribution” of 50/50 between the two possible worlds, if you look at the situation through indexical-uncertainty-goggles instead of UDT-goggles. Alternatively, you may care about the product of the amounts of chocolate you get in P1 and P2, which isn’t so easy to interpret as indexical uncertainty.
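To make the contrast concrete, here is a minimal sketch of the two kinds of aggregation. All names here are mine, chosen for illustration; nothing in the original discussion fixes this notation.

```python
# A minimal sketch (names hypothetical): two utility functions over a
# vector of execution histories, one per world-program P1 and P2.

def u_additive(chocolate_p1: float, chocolate_p2: float) -> float:
    """Caring about total chocolate across P1 and P2. Equivalent (up to
    a constant factor) to expected utility under a 50/50 'prior' over
    the two worlds, so it admits an indexical-uncertainty reading."""
    return 0.5 * chocolate_p1 + 0.5 * chocolate_p2

def u_multiplicative(chocolate_p1: float, chocolate_p2: float) -> float:
    """Caring about the product of the amounts. No single probability
    weighting of the two histories reproduces this, so it resists an
    indexical-uncertainty reading."""
    return chocolate_p1 * chocolate_p2

print(u_additive(2.0, 3.0))        # 2.5: a weighted average of the worlds
print(u_multiplicative(2.0, 3.0))  # 6.0: the worlds are coupled
```

The additive case decomposes into per-world terms weighted by fixed probabilities; the multiplicative case couples the worlds, which is exactly why it can’t be recast as a prior over them.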
When you expect almost complete logical transparency, mathematical intuition won’t specify anything beyond the logical axioms. But where you expect logical uncertainty, the probabilities given by mathematical intuition play a role analogous to that of a prior distribution: the utilities associated with specific execution histories are taken through a further expectation according to those probabilities. I agree that to the extent mathematical intuition plays no role in decision-making, UDT utilities are analogous to expected utility; but in fact it does play that role, and it’s more natural to draw the analogy between the informal notion of possible worlds and execution histories than between possible worlds and world-programs. See also this comment.
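Schematically, and with notation that is mine rather than anything fixed in the original discussion, the resulting quantity is a nested expectation:

```latex
% A sketch, not a settled formalism: M is the mathematical-intuition
% "prior" over execution-history vectors E given a choice a, and U is
% the utility function on such vectors.
\mathrm{EU}(a) \;=\; \sum_{E} M(E \mid a)\, U(E)
```

Here M plays the role the prior plays in ordinary expected utility, and execution-history vectors play the role of possible worlds, which is the analogy drawn above.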