With a UDT utility function you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function. What subjective anticipation means in that context is anyone's guess, but I'd use something like the total measure of the possible worlds that you expect could be controlled by the you-that-receives-certain-observations. This quantity can be used to estimate how important it is to make optimized decisions from those control sites, compared to the control sites resulting from receiving alternative observations, which matters when scheduling computational resources for planning for alternative possibilities in advance and coordinating later.
This sense of subjective anticipation also has nothing to do with the UDT utility function, although it refers to more than a probability distribution: it also needs to establish which you-with-observations can control which possible worlds.
With a UDT utility function you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function.
No, in the formalism of Wei’s original post it’s all one giant object which is not necessarily decomposable in the way you suggest. But this is probably splitting hairs.
Tentatively agree with your last paragraph, but need to understand more.
Nah, in the formalism of Wei’s original post it’s all one giant object.
It doesn’t read this way to me. From the post:
More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on. [...]
When it receives an input X, it looks inside the programs P1, P2, P3, …, and uses its “mathematical intuition” to form a probability distribution P_Y over the set of vectors <E1, E2, E3, …> for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(<E1, E2, E3, …>) U(<E1, E2, E3, …>).
U is still utility without probability; the probabilities come from “mathematical intuition”, which is separate from the utility assignment (see the sketch below). That is what I said:
you still have a separate object representing the probability distribution over possible worlds; it's not part of the utility function
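For concreteness, here is a minimal sketch of the procedure quoted above, in Python. The toy numbers, the two candidate outputs, and the names `math_intuition` and `utility` are illustrative assumptions, not anything from the original post; the point is only that P_Y and U are handled as separate objects.

```python
# Minimal sketch: U is defined on execution-history vectors, while the
# "mathematical intuition" supplies, for each candidate output Y, a
# probability distribution P_Y over those vectors. Both are toy stand-ins.

def utility(vec):
    # U(<E1, E2>): no probabilities live here, only preferences over histories.
    e1, e2 = vec
    return {"E1a": 1, "E1b": 0}[e1] + {"E2a": 2, "E2b": 0}[e2]

def math_intuition(Y):
    # P_Y: a made-up distribution over history vectors for each output Y.
    if Y == "one-box":
        return {("E1a", "E2a"): 0.9, ("E1b", "E2b"): 0.1}
    else:  # "two-box"
        return {("E1a", "E2b"): 0.5, ("E1b", "E2a"): 0.5}

def decide(outputs):
    # Return the output Y* maximizing Sum_vec P_Y(vec) * U(vec).
    def expected_utility(Y):
        return sum(p * utility(vec) for vec, p in math_intuition(Y).items())
    return max(outputs, key=expected_utility)

print(decide(["one-box", "two-box"]))  # -> "one-box" with these toy numbers
```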
Wha? The probability distribution given by math intuition isn't part of the problem statement; it's part of the solution. We already know how to infer it from the utility function in simple cases, and the idea is that it should be inferable in principle.
When I read your comments, I often don’t understand what you understand and what you don’t. For the benefit of onlookers I’ll try to explain the idea again anyway.
A utility function defined on vectors of execution histories may be a weighted sum of utility functions on execution histories, or it may be something more complex. For example, you may care about the total amount of chocolate you get in world-programs P1 and P2 combined. This corresponds to a “prior probability distribution” of 50/50 between the two possible worlds, if you look at the situation through indexical-uncertainty goggles instead of UDT goggles. Alternatively, you may care about the product of the amounts of chocolate you get in P1 and P2, which isn't so easy to interpret as indexical uncertainty (see the sketch at the end of this comment).
When you expect almost complete logical transparency, mathematical intuition won't specify anything more than the logical axioms. But where you expect logical uncertainty, the probabilities given by mathematical intuition play a role analogous to that of a prior distribution: the utilities associated with specific execution histories are taken through another expectation according to those probabilities. I agree that to the extent mathematical intuition doesn't play a role in decision-making, UDT utilities are analogous to expected utility; but in fact it does play that role, and it's more natural to draw the analogy between the informal notion of possible worlds and execution histories, rather than between possible worlds and world-programs. See also this comment.
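To make the chocolate example above concrete, here is a toy sketch; the amounts and function names are made up for illustration. The additive utility can be rewritten as a scaled expectation under a 50/50 prior over the two world-programs, while the multiplicative one cannot.

```python
# Toy illustration of the chocolate example: c1 and c2 stand for the amount
# of chocolate you get in (the execution histories of) P1 and P2.

def total_chocolate_utility(c1, c2):
    # Additive: U = c1 + c2 = 2 * (0.5*c1 + 0.5*c2), i.e. a scaled expected
    # utility under a 50/50 "prior" over the two world-programs, so it can be
    # read through indexical-uncertainty goggles.
    return c1 + c2

def product_chocolate_utility(c1, c2):
    # Multiplicative: U = c1 * c2 is not a weighted sum p*c1 + (1-p)*c2 for
    # any fixed p, so it has no such indexical reading.
    return c1 * c2

c1, c2 = 3, 5
print(total_chocolate_utility(c1, c2))    # 8 == 2 * (0.5*3 + 0.5*5)
print(product_chocolate_utility(c1, c2))  # 15, outside the range [3, 5] spanned by p*3 + (1-p)*5
```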