That’s a fair point, but I’m not sure it convinces me completely.
Decision-making under Bayesian probability looks like maximizing a certain weighted sum. The weights are probabilities, and you’re supposed to come up with them before making a decision. The AMD (Absent-Minded Driver) problem points out that some of the weights might depend on your decision, so you can’t treat them as fixed inputs to that very decision.
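To make the shape of the worry explicit (this is just my shorthand, not notation taken from the AMD discussions): the Bayesian recipe is

\[ a^{\ast} = \arg\max_{a} \sum_i P(w_i)\, U(a, w_i), \]

and the AMD-style complaint is that the honest weights are really something like \( P(w_i \mid a) \), which you can’t write down before \( a \) is chosen.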
Decision-making in UDT also looks like maximizing a weighted sum. The weights are “degrees of caring” about different mathematical structures, and you’re supposed to come up with them before making a decision. Are we sure that similar problems can’t arise there?
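Schematically (same caveat, the notation is mine): the UDT recipe looks like

\[ a^{\ast} = \arg\max_{a} \sum_j c_j \, U_j\big(\text{what happens in structure } M_j \text{ if the agent’s algorithm outputs } a\big), \]

where the \( c_j \) are the fixed “degrees of caring” over mathematical structures \( M_j \). The question is whether the \( c_j \) could turn out to be as decision-dependent as the \( P(w_i) \) were in AMD.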
I may be missing your point. As you’ve written about before, things go haywire when the agent knows too much about its own decisions in advance. Hence hacks like “playing chicken with the universe”.
So, the agent can’t know too much about its own decisions in advance. But is this an example of indexical uncertainty? Or is it (as it seems to me) an example of a kind of logical uncertainty that an agent needs to have? Apparently, an agent needs to be sufficiently uncertain, or to have uncertainty of some particular kind, about the output of the algorithm that the agent is. But uncertainty about the output of an algorithm requires only logical uncertainty.
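For concreteness, here is a toy sketch of the “playing chicken with the universe” hack mentioned above. Everything in it (the `proves` interface, the `agent`/`decide` names) is hypothetical illustration under my own assumptions, not any actual implementation:

```python
def proves(statement: str) -> bool:
    """Stand-in for a bounded proof search over the agent's own source code.

    A real proof-based agent would call a theorem prover here; this stub
    simply never finds a proof, so the sketch runs as-is."""
    return False


def decide(actions: list[str]) -> str:
    # Chicken step: if the agent can prove that it will NOT take some action,
    # it takes that action anyway. If the prover is sound, this means no proof
    # of the agent's own output can be found before it acts -- the agent stays
    # logically uncertain about the output of the algorithm that it is.
    for a in actions:
        if proves(f"agent() != {a!r}"):
            return a
    # Otherwise fall through to the ordinary decision procedure
    # (a placeholder choice here).
    return actions[0]


print(decide(["one-box", "two-box"]))
```

The loop is the whole trick: it manufactures exactly the kind of uncertainty about the agent’s own output that the paragraph above says the agent needs.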