Nah, in the formalism of Wei’s original post it’s all one giant object.
It doesn’t read this way to me. From the post:
More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on. [...]
When it receives an input X, it looks inside the programs P1, P2, P3, …, and uses its “mathematical intuition” to form a probability distribution P_Y over the set of vectors <E1, E2, E3, …> for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(<E1, E2, E3, …>) U(<E1, E2, E3, …>).
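(For concreteness, here is a toy sketch of the quoted decision rule. It is an illustration only: it assumes a finite, explicitly enumerated set of candidate execution histories per program and a “mathematical intuition” given as an explicit probability function; all names are made up.)

```python
from itertools import product

def udt_choice(outputs, candidate_histories, math_intuition, utility):
    """Toy version of the quoted rule: return the output Y* maximizing
    Sum over vectors <E1, E2, ...> of P_Y(vector) * U(vector).

    outputs             -- possible output strings Y
    candidate_histories -- dict: program name -> its candidate execution histories
    math_intuition      -- function (Y, vector) -> probability P_Y(vector)
    utility             -- function vector -> U(vector), defined on whole vectors
    """
    names = sorted(candidate_histories)
    vectors = list(product(*(candidate_histories[n] for n in names)))
    def expected_utility(Y):
        return sum(math_intuition(Y, vec) * utility(vec) for vec in vectors)
    return max(outputs, key=expected_utility)
```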
U is still a utility function without probabilities; the probabilities come from the “mathematical intuition”, which is separate from the utility assignment. That is what I said:
you still have a separate object representing the probability distribution over possible worlds, it’s not part of the utility function
Wha? The probability distribution given by mathematical intuition isn’t part of the problem statement; it’s part of the solution. We already know how to infer it from the utility function in simple cases, and the idea is that it should be inferable in principle.
When I read your comments, I often don’t understand what you understand and what you don’t. For the benefit of onlookers I’ll try to explain the idea again anyway.
A utility function defined on vectors of execution histories may be a weighted sum of utility functions on execution histories, or it may be something more complex. For example, you may care about the total amount of chocolate you get in world-programs P1 and P2 combined. This corresponds to a “prior probability distribution” of 50⁄50 between the two possible worlds, if you look at the situation through indexical-uncertainty-goggles instead of UDT-goggles. Alternatively you may care about the product of the amounts of chocolate you get in P1 and P2, which isn’t so easy to interpret as indexical uncertainty.
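As a toy sketch of the two cases (hypothetical names; chocolate amounts are just numbers attached to each execution history):

```python
def chocolate(history):
    # Hypothetical accessor: chocolate obtained in one execution history.
    return history["chocolate"]

def u_total(vec):
    # "Total chocolate in P1 and P2 combined": a weighted sum of per-history utilities.
    e1, e2 = vec
    return chocolate(e1) + chocolate(e2)

def u_product(vec):
    # "Product of the two amounts": not a weighted sum of per-history utilities,
    # so it has no easy reading as indexical uncertainty over the two worlds.
    e1, e2 = vec
    return chocolate(e1) * chocolate(e2)

e1, e2 = {"chocolate": 4}, {"chocolate": 2}
# The additive case equals 2 * (0.5 * chocolate(e1) + 0.5 * chocolate(e2)), i.e.
# (up to a constant factor) an expected utility under a 50/50 "prior" over
# which of the two world-programs you are in.
assert u_total((e1, e2)) == 2 * (0.5 * chocolate(e1) + 0.5 * chocolate(e2))
```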
When you expect almost complete logical transparency, mathematical intuition won’t specify anything more than the logical axioms. But where you expect logical uncertainty, the probabilities given by mathematical intuition play a role analogous to that of a prior distribution, with the utilities associated with specific execution histories taken through a further expectation according to those probabilities. I agree that to the extent mathematical intuition plays no role in decision-making, UDT utilities are analogous to expected utilities, but in fact it does play that role, and it’s more natural to draw the analogy between the informal notion of possible worlds and execution histories than between possible worlds and world-programs. See also this comment.
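A minimal numerical sketch of that distinction (all names and numbers below are made up): under full logical transparency the mathematical intuition puts probability 1 on the single history vector actually produced for a given output, and the expectation collapses to a plain utility of that vector; under logical uncertainty it behaves like a prior over execution histories.

```python
# One world-program P1 with two candidate execution histories; the agent's
# output Y shifts the mathematical intuition's credences (numbers made up).
vectors = [("eat_chocolate",), ("no_chocolate",)]
chocolate_in = {"eat_chocolate": 3, "no_chocolate": 0}

def utility(vec):
    return chocolate_in[vec[0]]

def math_intuition(Y, vec):
    # Under full logical transparency this would return 1.0 for exactly one
    # vector per Y (and the expectation below would be a plain utility).
    p = 0.9 if Y == "cooperate" else 0.2
    return p if vec[0] == "eat_chocolate" else 1.0 - p

for Y in ("cooperate", "defect"):
    eu = sum(math_intuition(Y, vec) * utility(vec) for vec in vectors)
    print(Y, round(eu, 3))  # cooperate 2.7, defect 0.6: a prior-weighted expectation over histories
```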