Our best ideas for decision-making under indexical uncertainty (UDT) … involve some kind of priors
I don’t think that UDT is about decision-making under indexical uncertainty. I think that UDT is a clever way to reason without indexical uncertainty.
Suppose that several copies of an agent might exist. “Decision-making under indexical uncertainty” would mean choosing an action while uncertain about which of these copies “you” might be. Thus, the problem presupposes that “you” are a particular physical instance of the agent.
The UDT approach, in contrast, is to identify “you” with the abstract algorithm responsible for “your” decisions. Since there is only one such abstract algorithm, there are no copies of you, and thus none of the attendant problems of indexical uncertainty. The only uncertainty is the logical uncertainty about how the abstract algorithm’s outputs will control the histories in the multiverse.
That’s a fair point, but I’m not sure it convinces me completely.
Decision-making under Bayesian probability looks like maximizing a certain weighted sum. The weights are probabilities, and you’re supposed to come up with them before making a decision. The AMD (absent-minded driver) problem points out that some of the weights might depend on your decision, so you can’t use them for decision-making.
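To make that concrete, here is a minimal sketch of the weighted-sum picture; the worlds, probabilities, and utilities are made up purely for illustration:

```python
# Minimal sketch of the Bayesian weighted sum, with made-up worlds,
# probabilities, and utilities.

priors = {"world_A": 0.7, "world_B": 0.3}  # fixed before the decision

def utility(action, world):
    table = {
        ("stay", "world_A"): 10, ("stay", "world_B"): 0,
        ("switch", "world_A"): 2, ("switch", "world_B"): 8,
    }
    return table[(action, world)]

def best_action(actions):
    # Maximize sum_w P(w) * U(action, w). The point at issue: the weights
    # P(w) have to be usable independently of the action being evaluated.
    return max(actions, key=lambda a: sum(p * utility(a, w)
                                          for w, p in priors.items()))

print(best_action(["stay", "switch"]))  # -> "stay" with these numbers
```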
Decision-making in UDT also looks like maximizing a weighted sum. The weights are “degrees of caring” about different mathematical structures, and you’re supposed to come up with them before making a decision. Are we sure that similar problems can’t arise there?
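And here is a corresponding toy sketch of the UDT-style sum, where the fixed weights are “degrees of caring” over mathematical structures and each structure maps the algorithm’s output to an outcome. The structure names, policies, and numbers are again invented, not taken from any worked-out formalism:

```python
# Hypothetical sketch of a UDT-style sum: weights are fixed "degrees of
# caring" over mathematical structures, and each structure determines an
# outcome as a (logical) function of what the abstract algorithm outputs.

caring = {"structure_1": 0.6, "structure_2": 0.4}  # chosen before deciding

def value_in(structure, policy):
    outcomes = {
        ("structure_1", "one-box"): 100, ("structure_1", "two-box"): 1,
        ("structure_2", "one-box"): 0,   ("structure_2", "two-box"): 10,
    }
    return outcomes[(structure, policy)]

def udt_choice(policies):
    # Maximize sum_S caring(S) * (value of S if "my algorithm outputs policy").
    # The question in the text: can the caring weights stay independent of
    # the decision, in the way the AMD probabilities could not?
    return max(policies, key=lambda p: sum(w * value_in(s, p)
                                           for s, w in caring.items()))

print(udt_choice(["one-box", "two-box"]))  # -> "one-box" with these numbers
```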
I may be missing your point. As you’ve written about before, things go haywire when the agent knows too much about its own decisions in advance. Hence hacks like “playing chicken with the universe”.
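For concreteness, here is a toy rendering of the chicken rule as I understand it: before optimizing, check whether you can prove, of some action, that you won’t take it, and if so take that action immediately. The prover below is just a stub, so this only shows the control flow, not a real proof search over the agent’s source code:

```python
# Toy illustration of "playing chicken with the universe" (the chicken
# rule). The prover is a stand-in; a real agent would run a bounded proof
# search over its own source code.

def proves(statement):
    # Stub: pretend we searched for a proof and found none.
    return False

def decide(actions, expected_value):
    for a in actions:
        if proves(f"this agent does not output {a!r}"):
            # Playing chicken: do the thing you "proved" you wouldn't do,
            # so any such proof would make the underlying theory
            # inconsistent -- keeping the agent from knowing its own
            # decision in advance.
            return a
    return max(actions, key=expected_value)

print(decide(["one-box", "two-box"], {"one-box": 60, "two-box": 4.6}.get))
```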
So, the agent can’t know too much about its own decisions in advance. But is this an example of indexical uncertainty? Or is it (as it seems to me) an example of a kind of logical uncertainty that an agent needs to have? Apparently, an agent needs to be sufficiently uncertain, or to have uncertainty of some particular kind, about the output of the algorithm that the agent is. But uncertainty about the output of an algorithm requires only logical uncertainty.